00:00:00.001 Started by upstream project "autotest-per-patch" build number 132313
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 25758
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.086 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.086 The recommended git tool is: git
00:00:00.087 using credential 00000000-0000-0000-0000-000000000002
00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.120 Fetching changes from the remote Git repository
00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.163 Using shallow fetch with depth 1
00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.163 > git --version # timeout=10
00:00:00.196 > git --version # 'git version 2.39.2'
00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/42/25142/7 # timeout=5
00:00:05.782 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.796 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.808 Checking out Revision 57f57becdd59ebf8d101f9f2c9d245f4119bdd5d (FETCH_HEAD)
00:00:05.808 > git config core.sparsecheckout # timeout=10
00:00:05.820 > git read-tree -mu HEAD # timeout=10
00:00:05.852 > git checkout -f 57f57becdd59ebf8d101f9f2c9d245f4119bdd5d # timeout=5
00:00:05.879 Commit message: "jenkins/jjb-config: Use dedicated image version for LTS builds"
00:00:05.879 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.047 [Pipeline] Start of Pipeline
00:00:06.060 [Pipeline] library
00:00:06.061 Loading library shm_lib@master
00:00:06.061 Library shm_lib@master is cached. Copying from home.
00:00:06.078 [Pipeline] node
00:00:06.102 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.104 [Pipeline] {
00:00:06.113 [Pipeline] }
00:00:06.128 [Pipeline] // node
00:00:06.133 [Pipeline] node
00:00:06.138 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.140 [Pipeline] {
00:00:06.148 [Pipeline] catchError
00:00:06.149 [Pipeline] {
00:00:06.159 [Pipeline] wrap
00:00:06.165 [Pipeline] {
00:00:06.173 [Pipeline] stage
00:00:06.175 [Pipeline] { (Prologue)
00:00:06.381 [Pipeline] sh
00:00:07.193 + logger -p user.info -t JENKINS-CI
00:00:07.220 [Pipeline] echo
00:00:07.222 Node: CYP9
00:00:07.231 [Pipeline] sh
00:00:07.597 [Pipeline] setCustomBuildProperty
00:00:07.612 [Pipeline] echo
00:00:07.613 Cleanup processes
00:00:07.619 [Pipeline] sh
00:00:07.917 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.917 5249 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.934 [Pipeline] sh
00:00:08.233 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.233 ++ grep -v 'sudo pgrep'
00:00:08.233 ++ awk '{print $1}'
00:00:08.233 + sudo kill -9
00:00:08.233 + true
00:00:08.251 [Pipeline] cleanWs
00:00:08.262 [WS-CLEANUP] Deleting project workspace...
00:00:08.262 [WS-CLEANUP] Deferred wipeout is used...
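The "Cleanup processes" step above builds its kill list with a pgrep | grep -v | awk pipeline and tolerates an empty result with `+ true` (kill fails when given no PIDs). A minimal standalone sketch of that logic, assuming the workspace path from this log; `sudo` is dropped so the sketch can run unprivileged:

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the Jenkins "Cleanup processes" step above.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

# Match full command lines under the workspace (-a prints the command,
# -f matches the full command line), drop the pgrep process itself,
# keep only the PID column.
pids=$(pgrep -af "$WORKSPACE/spdk" | grep -v 'pgrep' | awk '{print $1}')

# An empty list makes `kill -9` exit with a usage error, so mirror the
# log's `+ true` guard to keep the stage green.
kill -9 $pids 2>/dev/null || true
echo "cleanup done"
```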
00:00:08.275 [WS-CLEANUP] done
00:00:08.279 [Pipeline] setCustomBuildProperty
00:00:08.295 [Pipeline] sh
00:00:08.586 + sudo git config --global --replace-all safe.directory '*'
00:00:08.677 [Pipeline] httpRequest
00:00:11.585 [Pipeline] echo
00:00:11.586 Sorcerer 10.211.164.20 is alive
00:00:11.596 [Pipeline] retry
00:00:11.599 [Pipeline] {
00:00:11.613 [Pipeline] httpRequest
00:00:11.619 HttpMethod: GET
00:00:11.619 URL: http://10.211.164.20/packages/jbp_57f57becdd59ebf8d101f9f2c9d245f4119bdd5d.tar.gz
00:00:11.620 Sending request to url: http://10.211.164.20/packages/jbp_57f57becdd59ebf8d101f9f2c9d245f4119bdd5d.tar.gz
00:00:11.625 Response Code: HTTP/1.1 200 OK
00:00:11.626 Success: Status code 200 is in the accepted range: 200,404
00:00:11.626 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_57f57becdd59ebf8d101f9f2c9d245f4119bdd5d.tar.gz
00:00:12.781 [Pipeline] }
00:00:12.800 [Pipeline] // retry
00:00:12.808 [Pipeline] sh
00:00:13.103 + tar --no-same-owner -xf jbp_57f57becdd59ebf8d101f9f2c9d245f4119bdd5d.tar.gz
00:00:13.123 [Pipeline] httpRequest
00:00:13.526 [Pipeline] echo
00:00:13.528 Sorcerer 10.211.164.20 is alive
00:00:13.538 [Pipeline] retry
00:00:13.541 [Pipeline] {
00:00:13.555 [Pipeline] httpRequest
00:00:13.561 HttpMethod: GET
00:00:13.561 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:13.563 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:13.582 Response Code: HTTP/1.1 200 OK
00:00:13.582 Success: Status code 200 is in the accepted range: 200,404
00:00:13.582 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:03:01.774 [Pipeline] }
00:03:01.794 [Pipeline] // retry
00:03:01.802 [Pipeline] sh
00:03:02.110 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:03:04.681 [Pipeline] sh
00:03:04.979 + git -C spdk log --oneline -n5
00:03:04.979 d47eb51c9 bdev: fix a race between reset start and complete
00:03:04.979 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:03:04.979 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:03:04.979 4bcab9fb9 correct kick for CQ full case
00:03:04.979 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:03:04.993 [Pipeline] }
00:03:05.008 [Pipeline] // stage
00:03:05.017 [Pipeline] stage
00:03:05.019 [Pipeline] { (Prepare)
00:03:05.037 [Pipeline] writeFile
00:03:05.056 [Pipeline] sh
00:03:05.350 + logger -p user.info -t JENKINS-CI
00:03:05.365 [Pipeline] sh
00:03:05.660 + logger -p user.info -t JENKINS-CI
00:03:05.675 [Pipeline] sh
00:03:05.975 + cat autorun-spdk.conf
00:03:05.975 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:05.975 SPDK_TEST_NVMF=1
00:03:05.975 SPDK_TEST_NVME_CLI=1
00:03:05.975 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:05.975 SPDK_TEST_NVMF_NICS=e810
00:03:05.975 SPDK_TEST_VFIOUSER=1
00:03:05.975 SPDK_RUN_UBSAN=1
00:03:05.975 NET_TYPE=phy
00:03:05.985 RUN_NIGHTLY=0
00:03:05.990 [Pipeline] readFile
00:03:06.046 [Pipeline] withEnv
00:03:06.048 [Pipeline] {
00:03:06.061 [Pipeline] sh
00:03:06.358 + set -ex
00:03:06.358 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:06.358 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:06.358 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:06.358 ++ SPDK_TEST_NVMF=1
00:03:06.358 ++ SPDK_TEST_NVME_CLI=1
00:03:06.358 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:06.358 ++ SPDK_TEST_NVMF_NICS=e810
00:03:06.358 ++ SPDK_TEST_VFIOUSER=1
00:03:06.358 ++ SPDK_RUN_UBSAN=1
00:03:06.358 ++ NET_TYPE=phy
00:03:06.358 ++ RUN_NIGHTLY=0
00:03:06.358 + case $SPDK_TEST_NVMF_NICS in
00:03:06.358 + DRIVERS=ice
00:03:06.358 + [[ tcp == \r\d\m\a ]]
00:03:06.358 + [[ -n ice ]]
00:03:06.358 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:06.358 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:12.964 rmmod: ERROR: Module irdma is not currently loaded
00:03:12.964 rmmod: ERROR: Module i40iw is not currently loaded
00:03:12.964 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:12.964 + true
00:03:12.964 + for D in $DRIVERS
00:03:12.964 + sudo modprobe ice
00:03:12.964 + exit 0
00:03:12.976 [Pipeline] }
00:03:12.992 [Pipeline] // withEnv
00:03:12.997 [Pipeline] }
00:03:13.012 [Pipeline] // stage
00:03:13.023 [Pipeline] catchError
00:03:13.025 [Pipeline] {
00:03:13.039 [Pipeline] timeout
00:03:13.039 Timeout set to expire in 1 hr 0 min
00:03:13.041 [Pipeline] {
00:03:13.055 [Pipeline] stage
00:03:13.057 [Pipeline] { (Tests)
00:03:13.071 [Pipeline] sh
00:03:13.366 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:13.366 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:13.366 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:13.366 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:13.366 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:13.366 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:13.366 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:13.366 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:13.366 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:13.366 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:13.366 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:13.366 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:13.366 + source /etc/os-release
00:03:13.366 ++ NAME='Fedora Linux'
00:03:13.366 ++ VERSION='39 (Cloud Edition)'
00:03:13.366 ++ ID=fedora
00:03:13.366 ++ VERSION_ID=39
00:03:13.366 ++ VERSION_CODENAME=
00:03:13.366 ++ PLATFORM_ID=platform:f39
00:03:13.366 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:13.366 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:13.366 ++ LOGO=fedora-logo-icon
00:03:13.366 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:13.366 ++ HOME_URL=https://fedoraproject.org/
00:03:13.366 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:13.366 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:13.366 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:13.366 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:13.366 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:13.366 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:13.366 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:13.366 ++ SUPPORT_END=2024-11-12
00:03:13.366 ++ VARIANT='Cloud Edition'
00:03:13.366 ++ VARIANT_ID=cloud
00:03:13.366 + uname -a
00:03:13.366 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:13.366 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:16.674 Hugepages
00:03:16.674 node hugesize free / total
00:03:16.674 node0 1048576kB 0 / 0
00:03:16.674 node0 2048kB 0 / 0
00:03:16.674 node1 1048576kB 0 / 0
00:03:16.674 node1 2048kB 0 / 0
00:03:16.674
00:03:16.674 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:16.674 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:16.674 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:03:16.674 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:16.674 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:16.674 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:16.674 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:16.674 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:16.674 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:16.674 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:16.674 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:16.674 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:16.674 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:16.674 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:16.674 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:16.674 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:16.674 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:16.674 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:03:16.674 + rm -f /tmp/spdk-ld-path
00:03:16.674 + source autorun-spdk.conf
00:03:16.674 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:16.674 ++ SPDK_TEST_NVMF=1
00:03:16.674 ++ SPDK_TEST_NVME_CLI=1
00:03:16.674 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:16.674 ++ SPDK_TEST_NVMF_NICS=e810
00:03:16.674 ++ SPDK_TEST_VFIOUSER=1
00:03:16.674 ++ SPDK_RUN_UBSAN=1
00:03:16.674 ++ NET_TYPE=phy
00:03:16.674 ++ RUN_NIGHTLY=0
00:03:16.674 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:16.674 + [[ -n '' ]]
00:03:16.674 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:16.674 + for M in /var/spdk/build-*-manifest.txt
00:03:16.674 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:16.674 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:16.674 + for M in /var/spdk/build-*-manifest.txt
00:03:16.674 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:16.674 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:16.674 + for M in /var/spdk/build-*-manifest.txt
00:03:16.674 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:16.674 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:16.674 ++ uname
00:03:16.674 + [[ Linux == \L\i\n\u\x ]]
00:03:16.674 + sudo dmesg -T
00:03:16.674 + sudo dmesg --clear
00:03:16.674 + dmesg_pid=6897
00:03:16.674 + [[ Fedora Linux == FreeBSD ]]
00:03:16.674 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:16.674 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:16.674 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:16.674 + sudo dmesg -Tw
00:03:16.674 + [[ -x /usr/src/fio-static/fio ]]
00:03:16.674 + export FIO_BIN=/usr/src/fio-static/fio
00:03:16.674 + FIO_BIN=/usr/src/fio-static/fio
00:03:16.674 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:16.674 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:16.674 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:16.674 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:16.674 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:16.674 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:16.674 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:16.674 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:16.675 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:16.675 09:21:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:16.675 09:21:03 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:16.675 09:21:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:16.675 09:21:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:16.675 09:21:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:16.675 09:21:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:16.675 09:21:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:03:16.675 09:21:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:03:16.675 09:21:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:03:16.675 09:21:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:03:16.675 09:21:03 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:03:16.675 09:21:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:16.675 09:21:03 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:16.675 09:21:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:16.675 09:21:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:16.675 09:21:03 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:16.675 09:21:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:16.675 09:21:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:16.675 09:21:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:16.675 09:21:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:16.675 09:21:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:16.675 09:21:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:16.675 09:21:03 -- paths/export.sh@5 -- $ export PATH
00:03:16.675 09:21:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:16.937 09:21:03 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:16.937 09:21:03 -- common/autobuild_common.sh@486 -- $ date +%s
00:03:16.937 09:21:03 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732004463.XXXXXX
00:03:16.937 09:21:03 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732004463.GobGIU
00:03:16.937 09:21:03 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:03:16.937 09:21:03 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:03:16.937 09:21:03 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:16.937 09:21:03 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:16.937 09:21:03 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:16.937 09:21:03 -- common/autobuild_common.sh@502 -- $ get_config_params
00:03:16.937 09:21:03 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:16.937 09:21:03 -- common/autotest_common.sh@10 -- $ set +x
00:03:16.937 09:21:03 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:16.937 09:21:03 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:03:16.937 09:21:03 -- pm/common@17 -- $ local monitor
00:03:16.937 09:21:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:16.937 09:21:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:16.937 09:21:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:16.937 09:21:03 -- pm/common@21 -- $ date +%s
00:03:16.937 09:21:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:16.937 09:21:03 -- pm/common@25 -- $ sleep 1
00:03:16.937 09:21:03 -- pm/common@21 -- $ date +%s
00:03:16.937 09:21:03 -- pm/common@21 -- $ date +%s
00:03:16.937 09:21:03 -- pm/common@21 -- $ date +%s
00:03:16.937 09:21:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732004463
00:03:16.937 09:21:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732004463
00:03:16.937 09:21:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732004463
00:03:16.937 09:21:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732004463
00:03:16.937 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732004463_collect-cpu-load.pm.log
00:03:16.937 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732004463_collect-vmstat.pm.log
00:03:16.937 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732004463_collect-cpu-temp.pm.log
00:03:16.937 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732004463_collect-bmc-pm.bmc.pm.log
00:03:17.884 09:21:04 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:03:17.884 09:21:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:17.884 09:21:04 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:17.884 09:21:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:17.884 09:21:04 -- spdk/autobuild.sh@16 -- $ date -u
00:03:17.884 Tue Nov 19 08:21:04 AM UTC 2024
00:03:17.884 09:21:04 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:17.884 v25.01-pre-190-gd47eb51c9
00:03:17.884 09:21:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:17.884 09:21:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:17.884 09:21:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:17.885 09:21:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:17.885 09:21:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:17.885 09:21:04 -- common/autotest_common.sh@10 -- $ set +x
00:03:17.885 ************************************
00:03:17.885 START TEST ubsan
00:03:17.885 ************************************
00:03:17.885 09:21:04 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:17.885 using ubsan
00:03:17.885
00:03:17.885 real 0m0.001s
00:03:17.885 user 0m0.000s
00:03:17.885 sys 0m0.000s
00:03:17.885 09:21:04 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:17.885 09:21:04 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:17.885 ************************************
00:03:17.885 END TEST ubsan
00:03:17.885 ************************************
00:03:17.885 09:21:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:17.885 09:21:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:17.885 09:21:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:17.885 09:21:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:17.885 09:21:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:17.885 09:21:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:17.885 09:21:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:17.885 09:21:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:17.885 09:21:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:18.459 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:18.459 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:19.406 Using 'verbs' RDMA provider
00:03:35.729 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:50.653 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:50.653 Creating mk/config.mk...done.
00:03:50.653 Creating mk/cc.flags.mk...done.
00:03:50.653 Type 'make' to build.
00:03:50.653 09:21:37 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:03:50.653 09:21:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:50.653 09:21:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:50.653 09:21:37 -- common/autotest_common.sh@10 -- $ set +x
00:03:50.653 ************************************
00:03:50.653 START TEST make
00:03:50.653 ************************************
00:03:50.653 09:21:37 make -- common/autotest_common.sh@1129 -- $ make -j144
00:03:50.916 make[1]: Nothing to be done for 'all'.
00:03:53.467 The Meson build system
00:03:53.467 Version: 1.5.0
00:03:53.467 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:53.467 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:53.467 Build type: native build
00:03:53.467 Project name: libvfio-user
00:03:53.467 Project version: 0.0.1
00:03:53.467 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:53.467 C linker for the host machine: cc ld.bfd 2.40-14
00:03:53.467 Host machine cpu family: x86_64
00:03:53.467 Host machine cpu: x86_64
00:03:53.467 Run-time dependency threads found: YES
00:03:53.467 Library dl found: YES
00:03:53.467 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:53.467 Run-time dependency json-c found: YES 0.17
00:03:53.467 Run-time dependency cmocka found: YES 1.1.7
00:03:53.467 Program pytest-3 found: NO
00:03:53.467 Program flake8 found: NO
00:03:53.467 Program misspell-fixer found: NO
00:03:53.467 Program restructuredtext-lint found: NO
00:03:53.467 Program valgrind found: YES (/usr/bin/valgrind)
00:03:53.467 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:53.467 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:53.467 Compiler for C supports arguments -Wwrite-strings: YES
00:03:53.467 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:53.467 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:53.467 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:53.467 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:53.467 Build targets in project: 8
00:03:53.467 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:53.467 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:53.467
00:03:53.467 libvfio-user 0.0.1
00:03:53.467
00:03:53.467 User defined options
00:03:53.467 buildtype : debug
00:03:53.467 default_library: shared
00:03:53.467 libdir : /usr/local/lib
00:03:53.467
00:03:53.467 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:53.729 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:53.729 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:53.729 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:53.729 [3/37] Compiling C object samples/null.p/null.c.o
00:03:53.729 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:53.729 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:53.729 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:53.729 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:53.729 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:53.729 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:53.729 [10/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:53.729 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:53.729 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:53.729 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:53.729 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:53.729 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:53.729 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:53.729 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:53.729 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:53.729 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:53.729 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:53.729 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:53.729 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:53.729 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:53.729 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:53.729 [25/37] Compiling C object samples/server.p/server.c.o
00:03:53.729 [26/37] Compiling C object samples/client.p/client.c.o
00:03:53.729 [27/37] Linking target samples/client
00:03:53.729 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:53.992 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:53.992 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:53.992 [31/37] Linking target test/unit_tests
00:03:53.992 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:54.254 [33/37] Linking target samples/server
00:03:54.254 [34/37] Linking target samples/null
00:03:54.254 [35/37] Linking target samples/gpio-pci-idio-16
00:03:54.254 [36/37] Linking target samples/lspci
00:03:54.254 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:54.254 INFO: autodetecting backend as ninja
00:03:54.254 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:54.254 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:54.516 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:54.516 ninja: no work to do.
00:03:59.813 The Meson build system
00:03:59.813 Version: 1.5.0
00:03:59.813 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:59.813 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:59.813 Build type: native build
00:03:59.813 Program cat found: YES (/usr/bin/cat)
00:03:59.813 Project name: DPDK
00:03:59.813 Project version: 24.03.0
00:03:59.813 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:59.813 C linker for the host machine: cc ld.bfd 2.40-14
00:03:59.813 Host machine cpu family: x86_64
00:03:59.813 Host machine cpu: x86_64
00:03:59.813 Message: ## Building in Developer Mode ##
00:03:59.813 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:59.813 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:59.813 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:59.813 Program python3 found: YES (/usr/bin/python3)
00:03:59.813 Program cat found: YES (/usr/bin/cat)
00:03:59.813 Compiler for C supports arguments -march=native: YES
00:03:59.813 Checking for size of "void *" : 8
00:03:59.813 Checking for size of "void *" : 8 (cached)
00:03:59.813 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:59.813 Library m found: YES
00:03:59.813 Library numa found: YES
00:03:59.813 Has header "numaif.h" : YES
00:03:59.813 Library fdt found: NO
00:03:59.813 Library execinfo found: NO
00:03:59.813 Has header "execinfo.h" : YES
00:03:59.813 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:59.813 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:59.813 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:59.813 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:59.813 Run-time dependency openssl found: YES 3.1.1
00:03:59.813 Run-time dependency libpcap found: YES 1.10.4
00:03:59.813 Has header "pcap.h" with dependency libpcap: YES
00:03:59.813 Compiler for C supports arguments -Wcast-qual: YES
00:03:59.813 Compiler for C supports arguments -Wdeprecated: YES
00:03:59.813 Compiler for C supports arguments -Wformat: YES
00:03:59.813 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:59.813 Compiler for C supports arguments -Wformat-security: NO
00:03:59.813 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:59.813 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:59.813 Compiler for C supports arguments -Wnested-externs: YES
00:03:59.813 Compiler for C supports arguments -Wold-style-definition: YES
00:03:59.813 Compiler for C supports arguments -Wpointer-arith: YES
00:03:59.813 Compiler for C supports arguments -Wsign-compare: YES
00:03:59.813 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:59.813 Compiler for C supports arguments -Wundef: YES
00:03:59.813 Compiler for C supports arguments -Wwrite-strings: YES
00:03:59.813 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:59.813 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:59.813 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:59.813 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:59.813 Program objdump found: YES (/usr/bin/objdump)
00:03:59.813 Compiler for C supports arguments -mavx512f: YES
00:03:59.813 Checking if "AVX512 checking" compiles: YES
00:03:59.813 Fetching value of define "__SSE4_2__" : 1
00:03:59.813 Fetching value of define "__AES__" : 1
00:03:59.813 Fetching value of define "__AVX__" : 1
00:03:59.813 Fetching value of define "__AVX2__" : 1
00:03:59.813 Fetching value of define "__AVX512BW__" : 1
00:03:59.813 Fetching value of define "__AVX512CD__" : 1
00:03:59.813 Fetching value of define "__AVX512DQ__" : 1
00:03:59.813 Fetching value of define "__AVX512F__" : 1
00:03:59.813 Fetching value of define "__AVX512VL__" : 1 00:03:59.813 Fetching value of define "__PCLMUL__" : 1 00:03:59.813 Fetching value of define "__RDRND__" : 1 00:03:59.813 Fetching value of define "__RDSEED__" : 1 00:03:59.813 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:59.813 Fetching value of define "__znver1__" : (undefined) 00:03:59.813 Fetching value of define "__znver2__" : (undefined) 00:03:59.813 Fetching value of define "__znver3__" : (undefined) 00:03:59.813 Fetching value of define "__znver4__" : (undefined) 00:03:59.813 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:59.813 Message: lib/log: Defining dependency "log" 00:03:59.813 Message: lib/kvargs: Defining dependency "kvargs" 00:03:59.813 Message: lib/telemetry: Defining dependency "telemetry" 00:03:59.813 Checking for function "getentropy" : NO 00:03:59.813 Message: lib/eal: Defining dependency "eal" 00:03:59.813 Message: lib/ring: Defining dependency "ring" 00:03:59.813 Message: lib/rcu: Defining dependency "rcu" 00:03:59.813 Message: lib/mempool: Defining dependency "mempool" 00:03:59.813 Message: lib/mbuf: Defining dependency "mbuf" 00:03:59.813 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:59.813 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:59.813 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:59.813 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:59.813 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:59.813 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:59.813 Compiler for C supports arguments -mpclmul: YES 00:03:59.813 Compiler for C supports arguments -maes: YES 00:03:59.813 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:59.813 Compiler for C supports arguments -mavx512bw: YES 00:03:59.813 Compiler for C supports arguments -mavx512dq: YES 00:03:59.813 Compiler for C supports arguments -mavx512vl: YES 00:03:59.813 Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:59.813 Compiler for C supports arguments -mavx2: YES 00:03:59.813 Compiler for C supports arguments -mavx: YES 00:03:59.813 Message: lib/net: Defining dependency "net" 00:03:59.813 Message: lib/meter: Defining dependency "meter" 00:03:59.813 Message: lib/ethdev: Defining dependency "ethdev" 00:03:59.813 Message: lib/pci: Defining dependency "pci" 00:03:59.813 Message: lib/cmdline: Defining dependency "cmdline" 00:03:59.813 Message: lib/hash: Defining dependency "hash" 00:03:59.813 Message: lib/timer: Defining dependency "timer" 00:03:59.813 Message: lib/compressdev: Defining dependency "compressdev" 00:03:59.813 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:59.813 Message: lib/dmadev: Defining dependency "dmadev" 00:03:59.813 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:59.813 Message: lib/power: Defining dependency "power" 00:03:59.813 Message: lib/reorder: Defining dependency "reorder" 00:03:59.813 Message: lib/security: Defining dependency "security" 00:03:59.813 Has header "linux/userfaultfd.h" : YES 00:03:59.813 Has header "linux/vduse.h" : YES 00:03:59.813 Message: lib/vhost: Defining dependency "vhost" 00:03:59.813 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:59.813 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:59.813 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:59.813 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:59.813 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:59.813 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:59.813 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:59.813 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:59.813 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:59.813 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:59.814 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:59.814 Configuring doxy-api-html.conf using configuration 00:03:59.814 Configuring doxy-api-man.conf using configuration 00:03:59.814 Program mandb found: YES (/usr/bin/mandb) 00:03:59.814 Program sphinx-build found: NO 00:03:59.814 Configuring rte_build_config.h using configuration 00:03:59.814 Message: 00:03:59.814 ================= 00:03:59.814 Applications Enabled 00:03:59.814 ================= 00:03:59.814 00:03:59.814 apps: 00:03:59.814 00:03:59.814 00:03:59.814 Message: 00:03:59.814 ================= 00:03:59.814 Libraries Enabled 00:03:59.814 ================= 00:03:59.814 00:03:59.814 libs: 00:03:59.814 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:59.814 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:59.814 cryptodev, dmadev, power, reorder, security, vhost, 00:03:59.814 00:03:59.814 Message: 00:03:59.814 =============== 00:03:59.814 Drivers Enabled 00:03:59.814 =============== 00:03:59.814 00:03:59.814 common: 00:03:59.814 00:03:59.814 bus: 00:03:59.814 pci, vdev, 00:03:59.814 mempool: 00:03:59.814 ring, 00:03:59.814 dma: 00:03:59.814 00:03:59.814 net: 00:03:59.814 00:03:59.814 crypto: 00:03:59.814 00:03:59.814 compress: 00:03:59.814 00:03:59.814 vdpa: 00:03:59.814 00:03:59.814 00:03:59.814 Message: 00:03:59.814 ================= 00:03:59.814 Content Skipped 00:03:59.814 ================= 00:03:59.814 00:03:59.814 apps: 00:03:59.814 dumpcap: explicitly disabled via build config 00:03:59.814 graph: explicitly disabled via build config 00:03:59.814 pdump: explicitly disabled via build config 00:03:59.814 proc-info: explicitly disabled via build config 00:03:59.814 test-acl: explicitly disabled via build config 00:03:59.814 test-bbdev: explicitly disabled via build config 00:03:59.814 test-cmdline: explicitly disabled via build config 00:03:59.814 test-compress-perf: explicitly disabled via build config 00:03:59.814 test-crypto-perf: explicitly disabled via build 
config 00:03:59.814 test-dma-perf: explicitly disabled via build config 00:03:59.814 test-eventdev: explicitly disabled via build config 00:03:59.814 test-fib: explicitly disabled via build config 00:03:59.814 test-flow-perf: explicitly disabled via build config 00:03:59.814 test-gpudev: explicitly disabled via build config 00:03:59.814 test-mldev: explicitly disabled via build config 00:03:59.814 test-pipeline: explicitly disabled via build config 00:03:59.814 test-pmd: explicitly disabled via build config 00:03:59.814 test-regex: explicitly disabled via build config 00:03:59.814 test-sad: explicitly disabled via build config 00:03:59.814 test-security-perf: explicitly disabled via build config 00:03:59.814 00:03:59.814 libs: 00:03:59.814 argparse: explicitly disabled via build config 00:03:59.814 metrics: explicitly disabled via build config 00:03:59.814 acl: explicitly disabled via build config 00:03:59.814 bbdev: explicitly disabled via build config 00:03:59.814 bitratestats: explicitly disabled via build config 00:03:59.814 bpf: explicitly disabled via build config 00:03:59.814 cfgfile: explicitly disabled via build config 00:03:59.814 distributor: explicitly disabled via build config 00:03:59.814 efd: explicitly disabled via build config 00:03:59.814 eventdev: explicitly disabled via build config 00:03:59.814 dispatcher: explicitly disabled via build config 00:03:59.814 gpudev: explicitly disabled via build config 00:03:59.814 gro: explicitly disabled via build config 00:03:59.814 gso: explicitly disabled via build config 00:03:59.814 ip_frag: explicitly disabled via build config 00:03:59.814 jobstats: explicitly disabled via build config 00:03:59.814 latencystats: explicitly disabled via build config 00:03:59.814 lpm: explicitly disabled via build config 00:03:59.814 member: explicitly disabled via build config 00:03:59.814 pcapng: explicitly disabled via build config 00:03:59.814 rawdev: explicitly disabled via build config 00:03:59.814 regexdev: explicitly 
disabled via build config 00:03:59.814 mldev: explicitly disabled via build config 00:03:59.814 rib: explicitly disabled via build config 00:03:59.814 sched: explicitly disabled via build config 00:03:59.814 stack: explicitly disabled via build config 00:03:59.814 ipsec: explicitly disabled via build config 00:03:59.814 pdcp: explicitly disabled via build config 00:03:59.814 fib: explicitly disabled via build config 00:03:59.814 port: explicitly disabled via build config 00:03:59.814 pdump: explicitly disabled via build config 00:03:59.814 table: explicitly disabled via build config 00:03:59.814 pipeline: explicitly disabled via build config 00:03:59.814 graph: explicitly disabled via build config 00:03:59.814 node: explicitly disabled via build config 00:03:59.814 00:03:59.814 drivers: 00:03:59.814 common/cpt: not in enabled drivers build config 00:03:59.814 common/dpaax: not in enabled drivers build config 00:03:59.814 common/iavf: not in enabled drivers build config 00:03:59.814 common/idpf: not in enabled drivers build config 00:03:59.814 common/ionic: not in enabled drivers build config 00:03:59.814 common/mvep: not in enabled drivers build config 00:03:59.814 common/octeontx: not in enabled drivers build config 00:03:59.814 bus/auxiliary: not in enabled drivers build config 00:03:59.814 bus/cdx: not in enabled drivers build config 00:03:59.814 bus/dpaa: not in enabled drivers build config 00:03:59.814 bus/fslmc: not in enabled drivers build config 00:03:59.814 bus/ifpga: not in enabled drivers build config 00:03:59.814 bus/platform: not in enabled drivers build config 00:03:59.814 bus/uacce: not in enabled drivers build config 00:03:59.814 bus/vmbus: not in enabled drivers build config 00:03:59.814 common/cnxk: not in enabled drivers build config 00:03:59.814 common/mlx5: not in enabled drivers build config 00:03:59.814 common/nfp: not in enabled drivers build config 00:03:59.814 common/nitrox: not in enabled drivers build config 00:03:59.814 common/qat: not 
in enabled drivers build config 00:03:59.814 common/sfc_efx: not in enabled drivers build config 00:03:59.814 mempool/bucket: not in enabled drivers build config 00:03:59.814 mempool/cnxk: not in enabled drivers build config 00:03:59.814 mempool/dpaa: not in enabled drivers build config 00:03:59.814 mempool/dpaa2: not in enabled drivers build config 00:03:59.814 mempool/octeontx: not in enabled drivers build config 00:03:59.814 mempool/stack: not in enabled drivers build config 00:03:59.814 dma/cnxk: not in enabled drivers build config 00:03:59.814 dma/dpaa: not in enabled drivers build config 00:03:59.814 dma/dpaa2: not in enabled drivers build config 00:03:59.814 dma/hisilicon: not in enabled drivers build config 00:03:59.814 dma/idxd: not in enabled drivers build config 00:03:59.814 dma/ioat: not in enabled drivers build config 00:03:59.814 dma/skeleton: not in enabled drivers build config 00:03:59.814 net/af_packet: not in enabled drivers build config 00:03:59.814 net/af_xdp: not in enabled drivers build config 00:03:59.814 net/ark: not in enabled drivers build config 00:03:59.814 net/atlantic: not in enabled drivers build config 00:03:59.814 net/avp: not in enabled drivers build config 00:03:59.814 net/axgbe: not in enabled drivers build config 00:03:59.814 net/bnx2x: not in enabled drivers build config 00:03:59.814 net/bnxt: not in enabled drivers build config 00:03:59.814 net/bonding: not in enabled drivers build config 00:03:59.814 net/cnxk: not in enabled drivers build config 00:03:59.814 net/cpfl: not in enabled drivers build config 00:03:59.814 net/cxgbe: not in enabled drivers build config 00:03:59.814 net/dpaa: not in enabled drivers build config 00:03:59.814 net/dpaa2: not in enabled drivers build config 00:03:59.814 net/e1000: not in enabled drivers build config 00:03:59.814 net/ena: not in enabled drivers build config 00:03:59.814 net/enetc: not in enabled drivers build config 00:03:59.814 net/enetfec: not in enabled drivers build config 
00:03:59.814 net/enic: not in enabled drivers build config 00:03:59.814 net/failsafe: not in enabled drivers build config 00:03:59.814 net/fm10k: not in enabled drivers build config 00:03:59.814 net/gve: not in enabled drivers build config 00:03:59.814 net/hinic: not in enabled drivers build config 00:03:59.814 net/hns3: not in enabled drivers build config 00:03:59.814 net/i40e: not in enabled drivers build config 00:03:59.814 net/iavf: not in enabled drivers build config 00:03:59.814 net/ice: not in enabled drivers build config 00:03:59.814 net/idpf: not in enabled drivers build config 00:03:59.814 net/igc: not in enabled drivers build config 00:03:59.814 net/ionic: not in enabled drivers build config 00:03:59.814 net/ipn3ke: not in enabled drivers build config 00:03:59.814 net/ixgbe: not in enabled drivers build config 00:03:59.814 net/mana: not in enabled drivers build config 00:03:59.814 net/memif: not in enabled drivers build config 00:03:59.814 net/mlx4: not in enabled drivers build config 00:03:59.814 net/mlx5: not in enabled drivers build config 00:03:59.814 net/mvneta: not in enabled drivers build config 00:03:59.814 net/mvpp2: not in enabled drivers build config 00:03:59.814 net/netvsc: not in enabled drivers build config 00:03:59.814 net/nfb: not in enabled drivers build config 00:03:59.814 net/nfp: not in enabled drivers build config 00:03:59.814 net/ngbe: not in enabled drivers build config 00:03:59.815 net/null: not in enabled drivers build config 00:03:59.815 net/octeontx: not in enabled drivers build config 00:03:59.815 net/octeon_ep: not in enabled drivers build config 00:03:59.815 net/pcap: not in enabled drivers build config 00:03:59.815 net/pfe: not in enabled drivers build config 00:03:59.815 net/qede: not in enabled drivers build config 00:03:59.815 net/ring: not in enabled drivers build config 00:03:59.815 net/sfc: not in enabled drivers build config 00:03:59.815 net/softnic: not in enabled drivers build config 00:03:59.815 net/tap: not in 
enabled drivers build config 00:03:59.815 net/thunderx: not in enabled drivers build config 00:03:59.815 net/txgbe: not in enabled drivers build config 00:03:59.815 net/vdev_netvsc: not in enabled drivers build config 00:03:59.815 net/vhost: not in enabled drivers build config 00:03:59.815 net/virtio: not in enabled drivers build config 00:03:59.815 net/vmxnet3: not in enabled drivers build config 00:03:59.815 raw/*: missing internal dependency, "rawdev" 00:03:59.815 crypto/armv8: not in enabled drivers build config 00:03:59.815 crypto/bcmfs: not in enabled drivers build config 00:03:59.815 crypto/caam_jr: not in enabled drivers build config 00:03:59.815 crypto/ccp: not in enabled drivers build config 00:03:59.815 crypto/cnxk: not in enabled drivers build config 00:03:59.815 crypto/dpaa_sec: not in enabled drivers build config 00:03:59.815 crypto/dpaa2_sec: not in enabled drivers build config 00:03:59.815 crypto/ipsec_mb: not in enabled drivers build config 00:03:59.815 crypto/mlx5: not in enabled drivers build config 00:03:59.815 crypto/mvsam: not in enabled drivers build config 00:03:59.815 crypto/nitrox: not in enabled drivers build config 00:03:59.815 crypto/null: not in enabled drivers build config 00:03:59.815 crypto/octeontx: not in enabled drivers build config 00:03:59.815 crypto/openssl: not in enabled drivers build config 00:03:59.815 crypto/scheduler: not in enabled drivers build config 00:03:59.815 crypto/uadk: not in enabled drivers build config 00:03:59.815 crypto/virtio: not in enabled drivers build config 00:03:59.815 compress/isal: not in enabled drivers build config 00:03:59.815 compress/mlx5: not in enabled drivers build config 00:03:59.815 compress/nitrox: not in enabled drivers build config 00:03:59.815 compress/octeontx: not in enabled drivers build config 00:03:59.815 compress/zlib: not in enabled drivers build config 00:03:59.815 regex/*: missing internal dependency, "regexdev" 00:03:59.815 ml/*: missing internal dependency, "mldev" 
00:03:59.815 vdpa/ifc: not in enabled drivers build config 00:03:59.815 vdpa/mlx5: not in enabled drivers build config 00:03:59.815 vdpa/nfp: not in enabled drivers build config 00:03:59.815 vdpa/sfc: not in enabled drivers build config 00:03:59.815 event/*: missing internal dependency, "eventdev" 00:03:59.815 baseband/*: missing internal dependency, "bbdev" 00:03:59.815 gpu/*: missing internal dependency, "gpudev" 00:03:59.815 00:03:59.815 00:03:59.815 Build targets in project: 84 00:03:59.815 00:03:59.815 DPDK 24.03.0 00:03:59.815 00:03:59.815 User defined options 00:03:59.815 buildtype : debug 00:03:59.815 default_library : shared 00:03:59.815 libdir : lib 00:03:59.815 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:59.815 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:59.815 c_link_args : 00:03:59.815 cpu_instruction_set: native 00:03:59.815 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:03:59.815 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:03:59.815 enable_docs : false 00:03:59.815 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:59.815 enable_kmods : false 00:03:59.815 max_lcores : 128 00:03:59.815 tests : false 00:03:59.815 00:03:59.815 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:00.081 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:00.351 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:00.351 [2/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:00.351 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:00.351 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:00.351 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:00.351 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:00.351 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:00.351 [8/267] Linking static target lib/librte_kvargs.a 00:04:00.351 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:00.351 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:00.351 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:00.351 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:00.351 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:00.351 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:00.351 [15/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:00.351 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:00.351 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:00.351 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:00.351 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:00.351 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:00.351 [21/267] Linking static target lib/librte_log.a 00:04:00.351 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:00.351 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:00.351 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:00.351 [25/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:00.351 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:00.351 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:00.610 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:00.610 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:00.610 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:00.610 [31/267] Linking static target lib/librte_pci.a 00:04:00.610 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:00.610 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:00.610 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:00.610 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:00.610 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:00.610 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:00.610 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:00.610 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:00.610 [40/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.610 [41/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:00.610 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:00.868 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:00.868 [44/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.868 [45/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:00.868 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:00.868 [47/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:00.868 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:00.868 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:00.868 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:00.868 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:00.868 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:00.868 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:00.868 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:00.868 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:00.869 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:00.869 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:00.869 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:00.869 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:00.869 [60/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:00.869 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:00.869 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:00.869 [63/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:00.869 [64/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:00.869 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:00.869 [66/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:00.869 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:00.869 [68/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:00.869 [69/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:00.869 [70/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:00.869 [71/267] Linking static target lib/librte_telemetry.a 00:04:00.869 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:00.869 [73/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:00.869 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:00.869 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:00.869 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:00.869 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:00.869 [78/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:00.869 [79/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:00.869 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:00.869 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:00.869 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:00.869 [83/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:00.869 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:00.869 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:00.869 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:00.869 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:00.869 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:00.869 [89/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:00.869 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:00.869 [91/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:00.869 [92/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:00.869 [93/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:00.869 [94/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:00.869 [95/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:00.869 [96/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:00.869 [97/267] Linking static target lib/librte_meter.a 00:04:00.869 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:00.869 [99/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:00.869 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:00.869 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:00.869 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:00.869 [103/267] Linking static target lib/librte_ring.a 00:04:00.869 [104/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:00.869 [105/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:00.869 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:00.869 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:04:00.869 [108/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:00.869 [109/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:00.869 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:00.869 [111/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:00.869 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:00.869 [113/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:00.869 [114/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:00.869 [115/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:00.869 [116/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:00.869 [117/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:00.869 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:00.869 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:00.869 [120/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:00.869 [121/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:00.869 [122/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:00.869 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:00.869 [124/267] Linking static target lib/librte_timer.a 00:04:00.869 [125/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:00.869 [126/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:00.869 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:00.869 [128/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.869 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:00.869 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:00.869 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:00.869 [132/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:00.869 [133/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:00.869 [134/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:00.869 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:00.869 [136/267] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:04:00.869 [137/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:00.869 [138/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:00.869 [139/267] Linking static target lib/librte_cmdline.a 00:04:00.869 [140/267] Linking static target lib/librte_mempool.a 00:04:00.869 [141/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:00.869 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:00.869 [143/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:00.869 [144/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:00.869 [145/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:00.869 [146/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:00.869 [147/267] Linking target lib/librte_log.so.24.1 00:04:00.869 [148/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:00.869 [149/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:00.869 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:01.130 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:01.130 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:01.130 [153/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:01.130 [154/267] Linking static target lib/librte_rcu.a 00:04:01.130 [155/267] Linking static target lib/librte_dmadev.a 00:04:01.130 [156/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:01.130 [157/267] Linking static target lib/librte_compressdev.a 00:04:01.130 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:01.130 [159/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:01.130 [160/267] Linking static target 
lib/librte_power.a 00:04:01.130 [161/267] Linking static target lib/librte_net.a 00:04:01.130 [162/267] Linking static target lib/librte_security.a 00:04:01.130 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:01.130 [164/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:01.130 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:01.130 [166/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:01.130 [167/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:01.130 [168/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:01.130 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:01.130 [170/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:01.130 [171/267] Linking static target lib/librte_mbuf.a 00:04:01.130 [172/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:01.130 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:01.130 [174/267] Linking static target lib/librte_eal.a 00:04:01.130 [175/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:01.130 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:01.130 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:01.130 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:01.130 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:01.130 [180/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:01.130 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:01.131 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:01.131 [183/267] Linking static target lib/librte_reorder.a 00:04:01.131 [184/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:01.131 [185/267] 
Linking target lib/librte_kvargs.so.24.1 00:04:01.131 [186/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.131 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:01.131 [188/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:01.131 [189/267] Linking static target drivers/librte_bus_vdev.a 00:04:01.131 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:01.131 [191/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:01.131 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:01.131 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:01.131 [194/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:01.131 [195/267] Linking static target lib/librte_hash.a 00:04:01.131 [196/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.131 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:01.392 [198/267] Linking static target drivers/librte_mempool_ring.a 00:04:01.392 [199/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:01.392 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:01.392 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:01.392 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:01.392 [203/267] Linking static target drivers/librte_bus_pci.a 00:04:01.392 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:01.392 [205/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.392 [206/267] 
Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.392 [207/267] Linking static target lib/librte_cryptodev.a 00:04:01.392 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.392 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.392 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:01.392 [211/267] Linking target lib/librte_telemetry.so.24.1 00:04:01.662 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.662 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:01.662 [214/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.662 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.662 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.924 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:01.924 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.924 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:01.924 [220/267] Linking static target lib/librte_ethdev.a 00:04:01.924 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.924 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.184 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.184 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.184 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by 
meson to capture output) 00:04:02.444 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.444 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:02.444 [228/267] Linking static target lib/librte_vhost.a 00:04:03.831 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.776 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.369 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.755 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.755 [233/267] Linking target lib/librte_eal.so.24.1 00:04:12.755 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:12.755 [235/267] Linking target lib/librte_ring.so.24.1 00:04:12.755 [236/267] Linking target lib/librte_meter.so.24.1 00:04:12.755 [237/267] Linking target lib/librte_pci.so.24.1 00:04:12.755 [238/267] Linking target lib/librte_dmadev.so.24.1 00:04:12.755 [239/267] Linking target lib/librte_timer.so.24.1 00:04:12.755 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:04:12.755 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:13.015 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:13.015 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:13.015 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:13.015 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:13.015 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:04:13.015 [247/267] Linking target lib/librte_rcu.so.24.1 00:04:13.015 [248/267] Linking target lib/librte_mempool.so.24.1 00:04:13.015 [249/267] 
Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:13.015 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:13.015 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:04:13.015 [252/267] Linking target lib/librte_mbuf.so.24.1 00:04:13.276 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:13.276 [254/267] Linking target lib/librte_compressdev.so.24.1 00:04:13.276 [255/267] Linking target lib/librte_reorder.so.24.1 00:04:13.276 [256/267] Linking target lib/librte_net.so.24.1 00:04:13.276 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:04:13.538 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:13.538 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:13.538 [260/267] Linking target lib/librte_hash.so.24.1 00:04:13.538 [261/267] Linking target lib/librte_cmdline.so.24.1 00:04:13.538 [262/267] Linking target lib/librte_ethdev.so.24.1 00:04:13.538 [263/267] Linking target lib/librte_security.so.24.1 00:04:13.538 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:13.538 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:13.799 [266/267] Linking target lib/librte_power.so.24.1 00:04:13.799 [267/267] Linking target lib/librte_vhost.so.24.1 00:04:13.799 INFO: autodetecting backend as ninja 00:04:13.799 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:04:18.010 CC lib/log/log.o 00:04:18.010 CC lib/log/log_flags.o 00:04:18.010 CC lib/log/log_deprecated.o 00:04:18.010 CC lib/ut/ut.o 00:04:18.010 CC lib/ut_mock/mock.o 00:04:18.010 LIB libspdk_ut.a 00:04:18.010 LIB libspdk_ut_mock.a 00:04:18.010 LIB libspdk_log.a 00:04:18.010 SO libspdk_ut.so.2.0 00:04:18.010 SO 
libspdk_ut_mock.so.6.0 00:04:18.010 SO libspdk_log.so.7.1 00:04:18.271 SYMLINK libspdk_ut_mock.so 00:04:18.271 SYMLINK libspdk_ut.so 00:04:18.271 SYMLINK libspdk_log.so 00:04:18.533 CC lib/util/base64.o 00:04:18.533 CC lib/util/bit_array.o 00:04:18.533 CC lib/util/cpuset.o 00:04:18.533 CC lib/util/crc16.o 00:04:18.533 CC lib/util/crc32.o 00:04:18.533 CC lib/util/crc32c.o 00:04:18.533 CC lib/util/crc32_ieee.o 00:04:18.533 CC lib/util/crc64.o 00:04:18.533 CC lib/util/fd.o 00:04:18.533 CC lib/util/dif.o 00:04:18.533 CC lib/util/fd_group.o 00:04:18.533 CC lib/util/file.o 00:04:18.533 CC lib/util/hexlify.o 00:04:18.533 CC lib/util/iov.o 00:04:18.533 CXX lib/trace_parser/trace.o 00:04:18.533 CC lib/util/math.o 00:04:18.533 CC lib/ioat/ioat.o 00:04:18.533 CC lib/util/net.o 00:04:18.533 CC lib/dma/dma.o 00:04:18.533 CC lib/util/pipe.o 00:04:18.533 CC lib/util/strerror_tls.o 00:04:18.533 CC lib/util/string.o 00:04:18.533 CC lib/util/uuid.o 00:04:18.533 CC lib/util/xor.o 00:04:18.533 CC lib/util/zipf.o 00:04:18.533 CC lib/util/md5.o 00:04:18.796 CC lib/vfio_user/host/vfio_user_pci.o 00:04:18.796 CC lib/vfio_user/host/vfio_user.o 00:04:18.796 LIB libspdk_dma.a 00:04:18.796 SO libspdk_dma.so.5.0 00:04:18.796 LIB libspdk_ioat.a 00:04:18.796 SYMLINK libspdk_dma.so 00:04:18.796 SO libspdk_ioat.so.7.0 00:04:19.101 SYMLINK libspdk_ioat.so 00:04:19.101 LIB libspdk_vfio_user.a 00:04:19.101 SO libspdk_vfio_user.so.5.0 00:04:19.101 LIB libspdk_util.a 00:04:19.101 SYMLINK libspdk_vfio_user.so 00:04:19.101 SO libspdk_util.so.10.1 00:04:19.362 SYMLINK libspdk_util.so 00:04:19.623 CC lib/conf/conf.o 00:04:19.623 CC lib/idxd/idxd.o 00:04:19.623 CC lib/idxd/idxd_user.o 00:04:19.623 CC lib/idxd/idxd_kernel.o 00:04:19.623 CC lib/vmd/vmd.o 00:04:19.623 CC lib/vmd/led.o 00:04:19.623 CC lib/json/json_parse.o 00:04:19.623 CC lib/env_dpdk/env.o 00:04:19.623 CC lib/json/json_util.o 00:04:19.623 CC lib/env_dpdk/memory.o 00:04:19.623 CC lib/json/json_write.o 00:04:19.623 CC lib/env_dpdk/pci.o 
00:04:19.623 CC lib/rdma_utils/rdma_utils.o 00:04:19.623 CC lib/env_dpdk/init.o 00:04:19.623 CC lib/env_dpdk/threads.o 00:04:19.623 CC lib/env_dpdk/pci_ioat.o 00:04:19.623 CC lib/env_dpdk/pci_virtio.o 00:04:19.623 CC lib/env_dpdk/pci_vmd.o 00:04:19.623 CC lib/env_dpdk/pci_idxd.o 00:04:19.623 CC lib/env_dpdk/pci_event.o 00:04:19.623 CC lib/env_dpdk/sigbus_handler.o 00:04:19.623 CC lib/env_dpdk/pci_dpdk.o 00:04:19.623 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.623 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.885 LIB libspdk_conf.a 00:04:19.885 SO libspdk_conf.so.6.0 00:04:19.885 LIB libspdk_rdma_utils.a 00:04:19.885 LIB libspdk_json.a 00:04:19.885 SYMLINK libspdk_conf.so 00:04:20.148 SO libspdk_rdma_utils.so.1.0 00:04:20.148 SO libspdk_json.so.6.0 00:04:20.148 LIB libspdk_trace_parser.a 00:04:20.148 SYMLINK libspdk_rdma_utils.so 00:04:20.148 SYMLINK libspdk_json.so 00:04:20.148 SO libspdk_trace_parser.so.6.0 00:04:20.148 SYMLINK libspdk_trace_parser.so 00:04:20.148 LIB libspdk_idxd.a 00:04:20.148 SO libspdk_idxd.so.12.1 00:04:20.410 LIB libspdk_vmd.a 00:04:20.410 SO libspdk_vmd.so.6.0 00:04:20.410 SYMLINK libspdk_idxd.so 00:04:20.410 SYMLINK libspdk_vmd.so 00:04:20.410 CC lib/rdma_provider/common.o 00:04:20.410 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:20.410 CC lib/jsonrpc/jsonrpc_server.o 00:04:20.410 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:20.410 CC lib/jsonrpc/jsonrpc_client.o 00:04:20.410 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:20.671 LIB libspdk_rdma_provider.a 00:04:20.671 SO libspdk_rdma_provider.so.7.0 00:04:20.671 LIB libspdk_jsonrpc.a 00:04:20.671 SO libspdk_jsonrpc.so.6.0 00:04:20.671 SYMLINK libspdk_rdma_provider.so 00:04:20.933 SYMLINK libspdk_jsonrpc.so 00:04:20.933 LIB libspdk_env_dpdk.a 00:04:20.933 SO libspdk_env_dpdk.so.15.1 00:04:21.194 SYMLINK libspdk_env_dpdk.so 00:04:21.194 CC lib/rpc/rpc.o 00:04:21.457 LIB libspdk_rpc.a 00:04:21.457 SO libspdk_rpc.so.6.0 00:04:21.457 SYMLINK libspdk_rpc.so 00:04:22.031 CC lib/trace/trace.o 00:04:22.031 CC 
lib/trace/trace_flags.o 00:04:22.031 CC lib/trace/trace_rpc.o 00:04:22.031 CC lib/notify/notify.o 00:04:22.031 CC lib/keyring/keyring.o 00:04:22.031 CC lib/notify/notify_rpc.o 00:04:22.031 CC lib/keyring/keyring_rpc.o 00:04:22.031 LIB libspdk_notify.a 00:04:22.031 SO libspdk_notify.so.6.0 00:04:22.031 LIB libspdk_keyring.a 00:04:22.031 LIB libspdk_trace.a 00:04:22.293 SO libspdk_keyring.so.2.0 00:04:22.293 SO libspdk_trace.so.11.0 00:04:22.293 SYMLINK libspdk_notify.so 00:04:22.293 SYMLINK libspdk_keyring.so 00:04:22.293 SYMLINK libspdk_trace.so 00:04:22.554 CC lib/thread/thread.o 00:04:22.554 CC lib/sock/sock.o 00:04:22.554 CC lib/thread/iobuf.o 00:04:22.554 CC lib/sock/sock_rpc.o 00:04:23.129 LIB libspdk_sock.a 00:04:23.129 SO libspdk_sock.so.10.0 00:04:23.129 SYMLINK libspdk_sock.so 00:04:23.390 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:23.390 CC lib/nvme/nvme_ctrlr.o 00:04:23.390 CC lib/nvme/nvme_fabric.o 00:04:23.390 CC lib/nvme/nvme_ns_cmd.o 00:04:23.390 CC lib/nvme/nvme_ns.o 00:04:23.390 CC lib/nvme/nvme_pcie_common.o 00:04:23.390 CC lib/nvme/nvme_pcie.o 00:04:23.390 CC lib/nvme/nvme_qpair.o 00:04:23.390 CC lib/nvme/nvme.o 00:04:23.390 CC lib/nvme/nvme_quirks.o 00:04:23.390 CC lib/nvme/nvme_transport.o 00:04:23.390 CC lib/nvme/nvme_discovery.o 00:04:23.390 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:23.390 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:23.390 CC lib/nvme/nvme_tcp.o 00:04:23.390 CC lib/nvme/nvme_opal.o 00:04:23.390 CC lib/nvme/nvme_io_msg.o 00:04:23.390 CC lib/nvme/nvme_poll_group.o 00:04:23.390 CC lib/nvme/nvme_zns.o 00:04:23.390 CC lib/nvme/nvme_stubs.o 00:04:23.390 CC lib/nvme/nvme_auth.o 00:04:23.390 CC lib/nvme/nvme_cuse.o 00:04:23.390 CC lib/nvme/nvme_vfio_user.o 00:04:23.390 CC lib/nvme/nvme_rdma.o 00:04:23.964 LIB libspdk_thread.a 00:04:23.964 SO libspdk_thread.so.11.0 00:04:23.964 SYMLINK libspdk_thread.so 00:04:24.538 CC lib/accel/accel.o 00:04:24.538 CC lib/accel/accel_rpc.o 00:04:24.538 CC lib/accel/accel_sw.o 00:04:24.538 CC lib/init/json_config.o 
00:04:24.538 CC lib/init/subsystem.o 00:04:24.538 CC lib/init/subsystem_rpc.o 00:04:24.538 CC lib/init/rpc.o 00:04:24.538 CC lib/blob/blobstore.o 00:04:24.538 CC lib/blob/request.o 00:04:24.538 CC lib/blob/zeroes.o 00:04:24.538 CC lib/blob/blob_bs_dev.o 00:04:24.538 CC lib/virtio/virtio.o 00:04:24.538 CC lib/virtio/virtio_vhost_user.o 00:04:24.538 CC lib/virtio/virtio_vfio_user.o 00:04:24.538 CC lib/virtio/virtio_pci.o 00:04:24.538 CC lib/vfu_tgt/tgt_endpoint.o 00:04:24.538 CC lib/vfu_tgt/tgt_rpc.o 00:04:24.538 CC lib/fsdev/fsdev.o 00:04:24.538 CC lib/fsdev/fsdev_io.o 00:04:24.538 CC lib/fsdev/fsdev_rpc.o 00:04:24.799 LIB libspdk_init.a 00:04:24.799 SO libspdk_init.so.6.0 00:04:24.799 LIB libspdk_virtio.a 00:04:24.799 LIB libspdk_vfu_tgt.a 00:04:24.800 SYMLINK libspdk_init.so 00:04:24.800 SO libspdk_virtio.so.7.0 00:04:24.800 SO libspdk_vfu_tgt.so.3.0 00:04:24.800 SYMLINK libspdk_vfu_tgt.so 00:04:24.800 SYMLINK libspdk_virtio.so 00:04:25.062 LIB libspdk_fsdev.a 00:04:25.062 SO libspdk_fsdev.so.2.0 00:04:25.062 CC lib/event/app.o 00:04:25.062 CC lib/event/reactor.o 00:04:25.062 CC lib/event/log_rpc.o 00:04:25.062 CC lib/event/app_rpc.o 00:04:25.062 CC lib/event/scheduler_static.o 00:04:25.323 SYMLINK libspdk_fsdev.so 00:04:25.323 LIB libspdk_accel.a 00:04:25.323 SO libspdk_accel.so.16.0 00:04:25.585 LIB libspdk_nvme.a 00:04:25.585 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:25.585 SYMLINK libspdk_accel.so 00:04:25.585 LIB libspdk_event.a 00:04:25.585 SO libspdk_nvme.so.15.0 00:04:25.585 SO libspdk_event.so.14.0 00:04:25.585 SYMLINK libspdk_event.so 00:04:25.847 SYMLINK libspdk_nvme.so 00:04:25.847 CC lib/bdev/bdev.o 00:04:25.847 CC lib/bdev/bdev_rpc.o 00:04:25.847 CC lib/bdev/bdev_zone.o 00:04:25.847 CC lib/bdev/part.o 00:04:25.847 CC lib/bdev/scsi_nvme.o 00:04:26.109 LIB libspdk_fuse_dispatcher.a 00:04:26.109 SO libspdk_fuse_dispatcher.so.1.0 00:04:26.371 SYMLINK libspdk_fuse_dispatcher.so 00:04:26.944 LIB libspdk_blob.a 00:04:27.205 SO libspdk_blob.so.11.0 
00:04:27.205 SYMLINK libspdk_blob.so 00:04:27.468 CC lib/lvol/lvol.o 00:04:27.468 CC lib/blobfs/blobfs.o 00:04:27.468 CC lib/blobfs/tree.o 00:04:28.439 LIB libspdk_bdev.a 00:04:28.439 SO libspdk_bdev.so.17.0 00:04:28.439 LIB libspdk_blobfs.a 00:04:28.439 SYMLINK libspdk_bdev.so 00:04:28.439 SO libspdk_blobfs.so.10.0 00:04:28.439 LIB libspdk_lvol.a 00:04:28.439 SYMLINK libspdk_blobfs.so 00:04:28.439 SO libspdk_lvol.so.10.0 00:04:28.439 SYMLINK libspdk_lvol.so 00:04:28.704 CC lib/nbd/nbd.o 00:04:28.704 CC lib/nbd/nbd_rpc.o 00:04:28.704 CC lib/nvmf/ctrlr.o 00:04:28.704 CC lib/scsi/dev.o 00:04:28.704 CC lib/nvmf/ctrlr_discovery.o 00:04:28.704 CC lib/scsi/lun.o 00:04:28.704 CC lib/nvmf/ctrlr_bdev.o 00:04:28.704 CC lib/scsi/scsi.o 00:04:28.704 CC lib/scsi/port.o 00:04:28.704 CC lib/nvmf/subsystem.o 00:04:28.704 CC lib/ftl/ftl_core.o 00:04:28.704 CC lib/ftl/ftl_init.o 00:04:28.704 CC lib/nvmf/nvmf.o 00:04:28.704 CC lib/scsi/scsi_bdev.o 00:04:28.704 CC lib/nvmf/nvmf_rpc.o 00:04:28.704 CC lib/ftl/ftl_layout.o 00:04:28.704 CC lib/scsi/scsi_pr.o 00:04:28.704 CC lib/scsi/scsi_rpc.o 00:04:28.704 CC lib/ftl/ftl_debug.o 00:04:28.704 CC lib/nvmf/transport.o 00:04:28.704 CC lib/ublk/ublk.o 00:04:28.704 CC lib/scsi/task.o 00:04:28.704 CC lib/ftl/ftl_io.o 00:04:28.704 CC lib/ublk/ublk_rpc.o 00:04:28.704 CC lib/nvmf/tcp.o 00:04:28.704 CC lib/nvmf/stubs.o 00:04:28.704 CC lib/ftl/ftl_sb.o 00:04:28.704 CC lib/nvmf/mdns_server.o 00:04:28.704 CC lib/ftl/ftl_l2p.o 00:04:28.704 CC lib/ftl/ftl_l2p_flat.o 00:04:28.704 CC lib/nvmf/vfio_user.o 00:04:28.704 CC lib/nvmf/rdma.o 00:04:28.704 CC lib/ftl/ftl_nv_cache.o 00:04:28.704 CC lib/nvmf/auth.o 00:04:28.704 CC lib/ftl/ftl_band.o 00:04:28.704 CC lib/ftl/ftl_band_ops.o 00:04:28.704 CC lib/ftl/ftl_writer.o 00:04:28.704 CC lib/ftl/ftl_rq.o 00:04:28.704 CC lib/ftl/ftl_reloc.o 00:04:28.704 CC lib/ftl/ftl_l2p_cache.o 00:04:28.704 CC lib/ftl/ftl_p2l.o 00:04:28.704 CC lib/ftl/ftl_p2l_log.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt.o 00:04:28.704 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:28.704 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:28.704 CC lib/ftl/utils/ftl_conf.o 00:04:28.704 CC lib/ftl/utils/ftl_bitmap.o 00:04:28.704 CC lib/ftl/utils/ftl_md.o 00:04:28.704 CC lib/ftl/utils/ftl_mempool.o 00:04:28.704 CC lib/ftl/utils/ftl_property.o 00:04:28.704 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:28.704 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:28.704 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:28.704 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:28.704 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:28.704 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:28.704 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:28.704 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:28.704 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:28.704 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:28.704 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:28.704 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:28.704 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:28.704 CC lib/ftl/base/ftl_base_dev.o 00:04:28.704 CC lib/ftl/base/ftl_base_bdev.o 00:04:28.704 CC lib/ftl/ftl_trace.o 00:04:29.274 LIB libspdk_nbd.a 00:04:29.274 SO libspdk_nbd.so.7.0 00:04:29.274 SYMLINK libspdk_nbd.so 00:04:29.535 LIB libspdk_scsi.a 00:04:29.535 SO libspdk_scsi.so.9.0 00:04:29.535 SYMLINK libspdk_scsi.so 00:04:29.535 LIB libspdk_ublk.a 00:04:29.795 SO libspdk_ublk.so.3.0 00:04:29.795 SYMLINK libspdk_ublk.so 00:04:29.795 LIB libspdk_ftl.a 00:04:30.056 CC lib/iscsi/init_grp.o 00:04:30.056 CC lib/iscsi/conn.o 00:04:30.056 CC lib/iscsi/iscsi.o 00:04:30.056 CC lib/iscsi/param.o 
00:04:30.056 CC lib/iscsi/portal_grp.o 00:04:30.056 CC lib/iscsi/tgt_node.o 00:04:30.056 CC lib/iscsi/iscsi_subsystem.o 00:04:30.056 CC lib/vhost/vhost.o 00:04:30.056 CC lib/vhost/vhost_rpc.o 00:04:30.056 CC lib/iscsi/iscsi_rpc.o 00:04:30.056 CC lib/vhost/vhost_scsi.o 00:04:30.056 CC lib/iscsi/task.o 00:04:30.056 CC lib/vhost/vhost_blk.o 00:04:30.056 CC lib/vhost/rte_vhost_user.o 00:04:30.056 SO libspdk_ftl.so.9.0 00:04:30.318 SYMLINK libspdk_ftl.so 00:04:30.891 LIB libspdk_nvmf.a 00:04:30.891 SO libspdk_nvmf.so.20.0 00:04:30.891 LIB libspdk_vhost.a 00:04:30.891 SO libspdk_vhost.so.8.0 00:04:31.153 SYMLINK libspdk_nvmf.so 00:04:31.153 SYMLINK libspdk_vhost.so 00:04:31.153 LIB libspdk_iscsi.a 00:04:31.153 SO libspdk_iscsi.so.8.0 00:04:31.416 SYMLINK libspdk_iscsi.so 00:04:31.989 CC module/env_dpdk/env_dpdk_rpc.o 00:04:31.989 CC module/vfu_device/vfu_virtio.o 00:04:31.989 CC module/vfu_device/vfu_virtio_blk.o 00:04:31.989 CC module/vfu_device/vfu_virtio_scsi.o 00:04:31.989 CC module/vfu_device/vfu_virtio_rpc.o 00:04:31.989 CC module/vfu_device/vfu_virtio_fs.o 00:04:32.250 LIB libspdk_env_dpdk_rpc.a 00:04:32.250 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:32.250 CC module/accel/error/accel_error.o 00:04:32.250 CC module/accel/error/accel_error_rpc.o 00:04:32.250 CC module/blob/bdev/blob_bdev.o 00:04:32.250 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:32.250 CC module/accel/ioat/accel_ioat.o 00:04:32.250 CC module/keyring/linux/keyring.o 00:04:32.250 CC module/accel/ioat/accel_ioat_rpc.o 00:04:32.250 CC module/keyring/linux/keyring_rpc.o 00:04:32.250 CC module/scheduler/gscheduler/gscheduler.o 00:04:32.250 CC module/fsdev/aio/fsdev_aio.o 00:04:32.250 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:32.250 CC module/fsdev/aio/linux_aio_mgr.o 00:04:32.250 CC module/accel/iaa/accel_iaa.o 00:04:32.250 CC module/sock/posix/posix.o 00:04:32.250 CC module/accel/iaa/accel_iaa_rpc.o 00:04:32.250 CC module/accel/dsa/accel_dsa.o 00:04:32.250 CC 
module/accel/dsa/accel_dsa_rpc.o 00:04:32.250 CC module/keyring/file/keyring.o 00:04:32.250 CC module/keyring/file/keyring_rpc.o 00:04:32.250 SO libspdk_env_dpdk_rpc.so.6.0 00:04:32.250 SYMLINK libspdk_env_dpdk_rpc.so 00:04:32.250 LIB libspdk_scheduler_dpdk_governor.a 00:04:32.250 LIB libspdk_keyring_file.a 00:04:32.250 LIB libspdk_scheduler_gscheduler.a 00:04:32.512 LIB libspdk_keyring_linux.a 00:04:32.512 LIB libspdk_accel_ioat.a 00:04:32.512 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:32.512 SO libspdk_keyring_file.so.2.0 00:04:32.512 SO libspdk_scheduler_gscheduler.so.4.0 00:04:32.512 LIB libspdk_accel_error.a 00:04:32.512 LIB libspdk_scheduler_dynamic.a 00:04:32.512 SO libspdk_keyring_linux.so.1.0 00:04:32.512 LIB libspdk_accel_iaa.a 00:04:32.512 SO libspdk_accel_ioat.so.6.0 00:04:32.512 SO libspdk_accel_error.so.2.0 00:04:32.512 SO libspdk_scheduler_dynamic.so.4.0 00:04:32.512 SO libspdk_accel_iaa.so.3.0 00:04:32.512 SYMLINK libspdk_keyring_file.so 00:04:32.512 LIB libspdk_blob_bdev.a 00:04:32.512 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:32.512 SYMLINK libspdk_keyring_linux.so 00:04:32.512 SYMLINK libspdk_scheduler_gscheduler.so 00:04:32.512 SYMLINK libspdk_accel_ioat.so 00:04:32.512 LIB libspdk_accel_dsa.a 00:04:32.512 SO libspdk_blob_bdev.so.11.0 00:04:32.512 SYMLINK libspdk_scheduler_dynamic.so 00:04:32.512 SYMLINK libspdk_accel_error.so 00:04:32.512 SYMLINK libspdk_accel_iaa.so 00:04:32.512 SO libspdk_accel_dsa.so.5.0 00:04:32.512 LIB libspdk_vfu_device.a 00:04:32.512 SYMLINK libspdk_blob_bdev.so 00:04:32.512 SYMLINK libspdk_accel_dsa.so 00:04:32.774 SO libspdk_vfu_device.so.3.0 00:04:32.774 SYMLINK libspdk_vfu_device.so 00:04:32.774 LIB libspdk_sock_posix.a 00:04:32.774 SO libspdk_sock_posix.so.6.0 00:04:32.774 LIB libspdk_fsdev_aio.a 00:04:32.775 SO libspdk_fsdev_aio.so.1.0 00:04:32.775 SYMLINK libspdk_sock_posix.so 00:04:33.035 SYMLINK libspdk_fsdev_aio.so 00:04:33.035 CC module/blobfs/bdev/blobfs_bdev.o 00:04:33.035 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:33.035 CC module/bdev/lvol/vbdev_lvol.o 00:04:33.035 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:33.296 CC module/bdev/delay/vbdev_delay.o 00:04:33.296 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:33.296 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:33.296 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:33.296 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:33.296 CC module/bdev/gpt/gpt.o 00:04:33.296 CC module/bdev/error/vbdev_error.o 00:04:33.296 CC module/bdev/gpt/vbdev_gpt.o 00:04:33.296 CC module/bdev/malloc/bdev_malloc.o 00:04:33.296 CC module/bdev/error/vbdev_error_rpc.o 00:04:33.296 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:33.296 CC module/bdev/split/vbdev_split.o 00:04:33.296 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:33.296 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:33.296 CC module/bdev/split/vbdev_split_rpc.o 00:04:33.296 CC module/bdev/passthru/vbdev_passthru.o 00:04:33.296 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:33.296 CC module/bdev/raid/bdev_raid.o 00:04:33.296 CC module/bdev/raid/bdev_raid_sb.o 00:04:33.296 CC module/bdev/raid/bdev_raid_rpc.o 00:04:33.296 CC module/bdev/ftl/bdev_ftl.o 00:04:33.296 CC module/bdev/raid/raid0.o 00:04:33.296 CC module/bdev/nvme/bdev_nvme.o 00:04:33.296 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:33.296 CC module/bdev/raid/concat.o 00:04:33.296 CC module/bdev/raid/raid1.o 00:04:33.296 CC module/bdev/iscsi/bdev_iscsi.o 00:04:33.296 CC module/bdev/null/bdev_null.o 00:04:33.296 CC module/bdev/aio/bdev_aio.o 00:04:33.296 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:33.296 CC module/bdev/null/bdev_null_rpc.o 00:04:33.296 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:33.296 CC module/bdev/aio/bdev_aio_rpc.o 00:04:33.296 CC module/bdev/nvme/nvme_rpc.o 00:04:33.296 CC module/bdev/nvme/bdev_mdns_client.o 00:04:33.296 CC module/bdev/nvme/vbdev_opal.o 00:04:33.296 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:33.296 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:33.557 
LIB libspdk_blobfs_bdev.a 00:04:33.557 SO libspdk_blobfs_bdev.so.6.0 00:04:33.557 LIB libspdk_bdev_split.a 00:04:33.557 LIB libspdk_bdev_error.a 00:04:33.557 SYMLINK libspdk_blobfs_bdev.so 00:04:33.557 SO libspdk_bdev_split.so.6.0 00:04:33.557 LIB libspdk_bdev_gpt.a 00:04:33.557 LIB libspdk_bdev_null.a 00:04:33.557 SO libspdk_bdev_error.so.6.0 00:04:33.557 LIB libspdk_bdev_ftl.a 00:04:33.557 SO libspdk_bdev_gpt.so.6.0 00:04:33.557 SO libspdk_bdev_null.so.6.0 00:04:33.557 SYMLINK libspdk_bdev_split.so 00:04:33.557 LIB libspdk_bdev_passthru.a 00:04:33.557 LIB libspdk_bdev_delay.a 00:04:33.557 LIB libspdk_bdev_malloc.a 00:04:33.557 SO libspdk_bdev_ftl.so.6.0 00:04:33.557 LIB libspdk_bdev_zone_block.a 00:04:33.557 LIB libspdk_bdev_aio.a 00:04:33.557 SO libspdk_bdev_delay.so.6.0 00:04:33.557 LIB libspdk_bdev_iscsi.a 00:04:33.557 SYMLINK libspdk_bdev_error.so 00:04:33.557 SO libspdk_bdev_passthru.so.6.0 00:04:33.557 SYMLINK libspdk_bdev_gpt.so 00:04:33.557 SO libspdk_bdev_malloc.so.6.0 00:04:33.557 SO libspdk_bdev_zone_block.so.6.0 00:04:33.557 SYMLINK libspdk_bdev_null.so 00:04:33.820 SO libspdk_bdev_aio.so.6.0 00:04:33.820 SO libspdk_bdev_iscsi.so.6.0 00:04:33.820 SYMLINK libspdk_bdev_ftl.so 00:04:33.820 SYMLINK libspdk_bdev_delay.so 00:04:33.820 SYMLINK libspdk_bdev_passthru.so 00:04:33.820 SYMLINK libspdk_bdev_malloc.so 00:04:33.820 SYMLINK libspdk_bdev_zone_block.so 00:04:33.820 LIB libspdk_bdev_lvol.a 00:04:33.820 SYMLINK libspdk_bdev_aio.so 00:04:33.820 SYMLINK libspdk_bdev_iscsi.so 00:04:33.820 LIB libspdk_bdev_virtio.a 00:04:33.820 SO libspdk_bdev_lvol.so.6.0 00:04:33.820 SO libspdk_bdev_virtio.so.6.0 00:04:33.820 SYMLINK libspdk_bdev_lvol.so 00:04:33.820 SYMLINK libspdk_bdev_virtio.so 00:04:34.082 LIB libspdk_bdev_raid.a 00:04:34.345 SO libspdk_bdev_raid.so.6.0 00:04:34.345 SYMLINK libspdk_bdev_raid.so 00:04:35.733 LIB libspdk_bdev_nvme.a 00:04:35.733 SO libspdk_bdev_nvme.so.7.1 00:04:35.733 SYMLINK libspdk_bdev_nvme.so 00:04:36.309 CC 
module/event/subsystems/iobuf/iobuf.o 00:04:36.309 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:36.309 CC module/event/subsystems/vmd/vmd.o 00:04:36.309 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:36.309 CC module/event/subsystems/keyring/keyring.o 00:04:36.309 CC module/event/subsystems/sock/sock.o 00:04:36.309 CC module/event/subsystems/scheduler/scheduler.o 00:04:36.309 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:36.309 CC module/event/subsystems/fsdev/fsdev.o 00:04:36.309 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:36.571 LIB libspdk_event_keyring.a 00:04:36.571 LIB libspdk_event_vhost_blk.a 00:04:36.571 LIB libspdk_event_vmd.a 00:04:36.571 LIB libspdk_event_iobuf.a 00:04:36.571 LIB libspdk_event_scheduler.a 00:04:36.571 LIB libspdk_event_fsdev.a 00:04:36.571 LIB libspdk_event_sock.a 00:04:36.571 LIB libspdk_event_vfu_tgt.a 00:04:36.571 SO libspdk_event_keyring.so.1.0 00:04:36.571 SO libspdk_event_vhost_blk.so.3.0 00:04:36.571 SO libspdk_event_fsdev.so.1.0 00:04:36.571 SO libspdk_event_vmd.so.6.0 00:04:36.571 SO libspdk_event_scheduler.so.4.0 00:04:36.571 SO libspdk_event_iobuf.so.3.0 00:04:36.571 SO libspdk_event_vfu_tgt.so.3.0 00:04:36.571 SO libspdk_event_sock.so.5.0 00:04:36.571 SYMLINK libspdk_event_keyring.so 00:04:36.571 SYMLINK libspdk_event_vhost_blk.so 00:04:36.571 SYMLINK libspdk_event_fsdev.so 00:04:36.571 SYMLINK libspdk_event_scheduler.so 00:04:36.571 SYMLINK libspdk_event_vmd.so 00:04:36.571 SYMLINK libspdk_event_iobuf.so 00:04:36.571 SYMLINK libspdk_event_vfu_tgt.so 00:04:36.571 SYMLINK libspdk_event_sock.so 00:04:37.145 CC module/event/subsystems/accel/accel.o 00:04:37.145 LIB libspdk_event_accel.a 00:04:37.145 SO libspdk_event_accel.so.6.0 00:04:37.406 SYMLINK libspdk_event_accel.so 00:04:37.668 CC module/event/subsystems/bdev/bdev.o 00:04:37.930 LIB libspdk_event_bdev.a 00:04:37.930 SO libspdk_event_bdev.so.6.0 00:04:37.930 SYMLINK libspdk_event_bdev.so 00:04:38.193 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:04:38.193 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:38.193 CC module/event/subsystems/scsi/scsi.o 00:04:38.193 CC module/event/subsystems/nbd/nbd.o 00:04:38.193 CC module/event/subsystems/ublk/ublk.o 00:04:38.454 LIB libspdk_event_ublk.a 00:04:38.454 LIB libspdk_event_nbd.a 00:04:38.454 LIB libspdk_event_scsi.a 00:04:38.454 SO libspdk_event_ublk.so.3.0 00:04:38.454 SO libspdk_event_nbd.so.6.0 00:04:38.454 SO libspdk_event_scsi.so.6.0 00:04:38.454 LIB libspdk_event_nvmf.a 00:04:38.454 SYMLINK libspdk_event_ublk.so 00:04:38.716 SO libspdk_event_nvmf.so.6.0 00:04:38.716 SYMLINK libspdk_event_nbd.so 00:04:38.716 SYMLINK libspdk_event_scsi.so 00:04:38.716 SYMLINK libspdk_event_nvmf.so 00:04:38.976 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:38.976 CC module/event/subsystems/iscsi/iscsi.o 00:04:39.237 LIB libspdk_event_vhost_scsi.a 00:04:39.237 LIB libspdk_event_iscsi.a 00:04:39.237 SO libspdk_event_vhost_scsi.so.3.0 00:04:39.237 SO libspdk_event_iscsi.so.6.0 00:04:39.237 SYMLINK libspdk_event_vhost_scsi.so 00:04:39.237 SYMLINK libspdk_event_iscsi.so 00:04:39.498 SO libspdk.so.6.0 00:04:39.498 SYMLINK libspdk.so 00:04:39.761 CXX app/trace/trace.o 00:04:39.761 CC app/trace_record/trace_record.o 00:04:39.761 CC app/spdk_top/spdk_top.o 00:04:39.761 CC app/spdk_nvme_identify/identify.o 00:04:39.761 CC app/spdk_nvme_perf/perf.o 00:04:39.761 TEST_HEADER include/spdk/accel.h 00:04:39.761 TEST_HEADER include/spdk/accel_module.h 00:04:39.761 CC app/spdk_lspci/spdk_lspci.o 00:04:39.761 TEST_HEADER include/spdk/assert.h 00:04:39.761 TEST_HEADER include/spdk/barrier.h 00:04:39.761 CC app/spdk_nvme_discover/discovery_aer.o 00:04:39.761 CC test/rpc_client/rpc_client_test.o 00:04:39.761 TEST_HEADER include/spdk/base64.h 00:04:39.761 TEST_HEADER include/spdk/bdev.h 00:04:39.761 TEST_HEADER include/spdk/bdev_module.h 00:04:39.761 TEST_HEADER include/spdk/bdev_zone.h 00:04:39.761 TEST_HEADER include/spdk/bit_array.h 00:04:39.761 TEST_HEADER include/spdk/blob_bdev.h 
00:04:39.761 TEST_HEADER include/spdk/bit_pool.h 00:04:39.761 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:40.023 TEST_HEADER include/spdk/blobfs.h 00:04:40.023 TEST_HEADER include/spdk/blob.h 00:04:40.023 TEST_HEADER include/spdk/conf.h 00:04:40.023 TEST_HEADER include/spdk/config.h 00:04:40.023 TEST_HEADER include/spdk/cpuset.h 00:04:40.023 TEST_HEADER include/spdk/crc16.h 00:04:40.023 TEST_HEADER include/spdk/crc32.h 00:04:40.023 TEST_HEADER include/spdk/crc64.h 00:04:40.023 TEST_HEADER include/spdk/dif.h 00:04:40.023 TEST_HEADER include/spdk/dma.h 00:04:40.023 TEST_HEADER include/spdk/env_dpdk.h 00:04:40.023 TEST_HEADER include/spdk/endian.h 00:04:40.023 TEST_HEADER include/spdk/env.h 00:04:40.023 TEST_HEADER include/spdk/event.h 00:04:40.023 TEST_HEADER include/spdk/fd.h 00:04:40.023 TEST_HEADER include/spdk/fd_group.h 00:04:40.023 TEST_HEADER include/spdk/file.h 00:04:40.023 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:40.024 TEST_HEADER include/spdk/fsdev.h 00:04:40.024 CC app/spdk_dd/spdk_dd.o 00:04:40.024 TEST_HEADER include/spdk/fsdev_module.h 00:04:40.024 CC app/nvmf_tgt/nvmf_main.o 00:04:40.024 TEST_HEADER include/spdk/ftl.h 00:04:40.024 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:40.024 TEST_HEADER include/spdk/gpt_spec.h 00:04:40.024 TEST_HEADER include/spdk/hexlify.h 00:04:40.024 CC app/iscsi_tgt/iscsi_tgt.o 00:04:40.024 TEST_HEADER include/spdk/histogram_data.h 00:04:40.024 TEST_HEADER include/spdk/idxd.h 00:04:40.024 TEST_HEADER include/spdk/init.h 00:04:40.024 TEST_HEADER include/spdk/idxd_spec.h 00:04:40.024 TEST_HEADER include/spdk/ioat_spec.h 00:04:40.024 TEST_HEADER include/spdk/ioat.h 00:04:40.024 TEST_HEADER include/spdk/iscsi_spec.h 00:04:40.024 TEST_HEADER include/spdk/json.h 00:04:40.024 TEST_HEADER include/spdk/jsonrpc.h 00:04:40.024 TEST_HEADER include/spdk/keyring_module.h 00:04:40.024 TEST_HEADER include/spdk/keyring.h 00:04:40.024 TEST_HEADER include/spdk/lvol.h 00:04:40.024 TEST_HEADER include/spdk/log.h 00:04:40.024 
TEST_HEADER include/spdk/likely.h 00:04:40.024 TEST_HEADER include/spdk/md5.h 00:04:40.024 TEST_HEADER include/spdk/memory.h 00:04:40.024 TEST_HEADER include/spdk/mmio.h 00:04:40.024 CC app/spdk_tgt/spdk_tgt.o 00:04:40.024 TEST_HEADER include/spdk/nbd.h 00:04:40.024 TEST_HEADER include/spdk/net.h 00:04:40.024 TEST_HEADER include/spdk/notify.h 00:04:40.024 TEST_HEADER include/spdk/nvme.h 00:04:40.024 TEST_HEADER include/spdk/nvme_intel.h 00:04:40.024 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:40.024 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:40.024 TEST_HEADER include/spdk/nvme_spec.h 00:04:40.024 TEST_HEADER include/spdk/nvme_zns.h 00:04:40.024 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:40.024 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:40.024 TEST_HEADER include/spdk/nvmf.h 00:04:40.024 TEST_HEADER include/spdk/nvmf_spec.h 00:04:40.024 TEST_HEADER include/spdk/nvmf_transport.h 00:04:40.024 TEST_HEADER include/spdk/opal_spec.h 00:04:40.024 TEST_HEADER include/spdk/opal.h 00:04:40.024 TEST_HEADER include/spdk/pci_ids.h 00:04:40.024 TEST_HEADER include/spdk/pipe.h 00:04:40.024 TEST_HEADER include/spdk/reduce.h 00:04:40.024 TEST_HEADER include/spdk/queue.h 00:04:40.024 TEST_HEADER include/spdk/rpc.h 00:04:40.024 TEST_HEADER include/spdk/scheduler.h 00:04:40.024 TEST_HEADER include/spdk/scsi.h 00:04:40.024 TEST_HEADER include/spdk/scsi_spec.h 00:04:40.024 TEST_HEADER include/spdk/string.h 00:04:40.024 TEST_HEADER include/spdk/sock.h 00:04:40.024 TEST_HEADER include/spdk/stdinc.h 00:04:40.024 TEST_HEADER include/spdk/thread.h 00:04:40.024 TEST_HEADER include/spdk/trace_parser.h 00:04:40.024 TEST_HEADER include/spdk/trace.h 00:04:40.024 TEST_HEADER include/spdk/tree.h 00:04:40.024 TEST_HEADER include/spdk/ublk.h 00:04:40.024 TEST_HEADER include/spdk/util.h 00:04:40.024 TEST_HEADER include/spdk/uuid.h 00:04:40.024 TEST_HEADER include/spdk/version.h 00:04:40.024 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:40.024 TEST_HEADER include/spdk/vfio_user_pci.h 
00:04:40.024 TEST_HEADER include/spdk/vhost.h 00:04:40.024 TEST_HEADER include/spdk/vmd.h 00:04:40.024 TEST_HEADER include/spdk/xor.h 00:04:40.024 TEST_HEADER include/spdk/zipf.h 00:04:40.024 CXX test/cpp_headers/accel.o 00:04:40.024 CXX test/cpp_headers/accel_module.o 00:04:40.024 CXX test/cpp_headers/assert.o 00:04:40.024 CXX test/cpp_headers/barrier.o 00:04:40.024 CXX test/cpp_headers/bdev.o 00:04:40.024 CXX test/cpp_headers/base64.o 00:04:40.024 CXX test/cpp_headers/bdev_module.o 00:04:40.024 CXX test/cpp_headers/bit_array.o 00:04:40.024 CXX test/cpp_headers/bdev_zone.o 00:04:40.024 CXX test/cpp_headers/bit_pool.o 00:04:40.024 CXX test/cpp_headers/blob_bdev.o 00:04:40.024 CXX test/cpp_headers/blobfs_bdev.o 00:04:40.024 CXX test/cpp_headers/blob.o 00:04:40.024 CXX test/cpp_headers/blobfs.o 00:04:40.024 CXX test/cpp_headers/conf.o 00:04:40.024 CXX test/cpp_headers/config.o 00:04:40.024 CXX test/cpp_headers/cpuset.o 00:04:40.024 CXX test/cpp_headers/crc16.o 00:04:40.024 CXX test/cpp_headers/crc32.o 00:04:40.024 CXX test/cpp_headers/crc64.o 00:04:40.024 CXX test/cpp_headers/dif.o 00:04:40.024 CXX test/cpp_headers/dma.o 00:04:40.024 CXX test/cpp_headers/endian.o 00:04:40.024 CXX test/cpp_headers/env_dpdk.o 00:04:40.024 CXX test/cpp_headers/env.o 00:04:40.024 CXX test/cpp_headers/event.o 00:04:40.024 CXX test/cpp_headers/fd_group.o 00:04:40.024 CXX test/cpp_headers/fd.o 00:04:40.024 CXX test/cpp_headers/file.o 00:04:40.024 CXX test/cpp_headers/fsdev.o 00:04:40.024 CXX test/cpp_headers/fsdev_module.o 00:04:40.024 CXX test/cpp_headers/ftl.o 00:04:40.024 CXX test/cpp_headers/fuse_dispatcher.o 00:04:40.024 CXX test/cpp_headers/hexlify.o 00:04:40.024 CXX test/cpp_headers/gpt_spec.o 00:04:40.024 CXX test/cpp_headers/idxd.o 00:04:40.024 CXX test/cpp_headers/histogram_data.o 00:04:40.024 CXX test/cpp_headers/idxd_spec.o 00:04:40.024 CXX test/cpp_headers/ioat.o 00:04:40.024 CXX test/cpp_headers/init.o 00:04:40.024 CXX test/cpp_headers/iscsi_spec.o 00:04:40.024 CXX 
test/cpp_headers/ioat_spec.o 00:04:40.024 CXX test/cpp_headers/json.o 00:04:40.024 CXX test/cpp_headers/keyring.o 00:04:40.024 CXX test/cpp_headers/jsonrpc.o 00:04:40.024 CXX test/cpp_headers/log.o 00:04:40.024 CXX test/cpp_headers/likely.o 00:04:40.024 CXX test/cpp_headers/keyring_module.o 00:04:40.024 CXX test/cpp_headers/lvol.o 00:04:40.024 CXX test/cpp_headers/md5.o 00:04:40.024 CXX test/cpp_headers/mmio.o 00:04:40.024 CXX test/cpp_headers/memory.o 00:04:40.024 CXX test/cpp_headers/net.o 00:04:40.024 CXX test/cpp_headers/nbd.o 00:04:40.024 CXX test/cpp_headers/notify.o 00:04:40.024 CXX test/cpp_headers/nvme_intel.o 00:04:40.024 CC examples/ioat/perf/perf.o 00:04:40.024 CXX test/cpp_headers/nvme.o 00:04:40.024 CXX test/cpp_headers/nvme_ocssd.o 00:04:40.024 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:40.024 CXX test/cpp_headers/nvme_spec.o 00:04:40.024 CC examples/util/zipf/zipf.o 00:04:40.024 CXX test/cpp_headers/nvme_zns.o 00:04:40.024 CXX test/cpp_headers/nvmf_cmd.o 00:04:40.024 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:40.024 CC test/app/jsoncat/jsoncat.o 00:04:40.024 CC test/thread/poller_perf/poller_perf.o 00:04:40.024 CXX test/cpp_headers/nvmf_transport.o 00:04:40.024 CXX test/cpp_headers/nvmf_spec.o 00:04:40.024 CXX test/cpp_headers/opal.o 00:04:40.024 CXX test/cpp_headers/nvmf.o 00:04:40.024 CXX test/cpp_headers/opal_spec.o 00:04:40.024 CXX test/cpp_headers/pci_ids.o 00:04:40.024 CXX test/cpp_headers/pipe.o 00:04:40.024 CXX test/cpp_headers/reduce.o 00:04:40.024 CXX test/cpp_headers/queue.o 00:04:40.024 CXX test/cpp_headers/rpc.o 00:04:40.024 CXX test/cpp_headers/scheduler.o 00:04:40.024 CC test/app/histogram_perf/histogram_perf.o 00:04:40.294 CXX test/cpp_headers/scsi_spec.o 00:04:40.294 CXX test/cpp_headers/scsi.o 00:04:40.294 LINK spdk_lspci 00:04:40.294 CC test/app/bdev_svc/bdev_svc.o 00:04:40.294 CXX test/cpp_headers/sock.o 00:04:40.294 CXX test/cpp_headers/stdinc.o 00:04:40.294 CXX test/cpp_headers/string.o 00:04:40.294 CXX 
test/cpp_headers/thread.o 00:04:40.294 CXX test/cpp_headers/trace.o 00:04:40.294 CC test/env/memory/memory_ut.o 00:04:40.294 CXX test/cpp_headers/trace_parser.o 00:04:40.294 CXX test/cpp_headers/tree.o 00:04:40.294 CXX test/cpp_headers/util.o 00:04:40.294 CXX test/cpp_headers/ublk.o 00:04:40.294 CXX test/cpp_headers/uuid.o 00:04:40.294 CC examples/ioat/verify/verify.o 00:04:40.294 CXX test/cpp_headers/version.o 00:04:40.294 CXX test/cpp_headers/vfio_user_pci.o 00:04:40.294 CXX test/cpp_headers/vfio_user_spec.o 00:04:40.294 CXX test/cpp_headers/vmd.o 00:04:40.294 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:40.294 CXX test/cpp_headers/vhost.o 00:04:40.294 CC test/env/vtophys/vtophys.o 00:04:40.294 CXX test/cpp_headers/zipf.o 00:04:40.294 CXX test/cpp_headers/xor.o 00:04:40.295 CC test/app/stub/stub.o 00:04:40.295 CC test/env/pci/pci_ut.o 00:04:40.295 CC app/fio/nvme/fio_plugin.o 00:04:40.295 CC app/fio/bdev/fio_plugin.o 00:04:40.295 CC test/dma/test_dma/test_dma.o 00:04:40.295 LINK rpc_client_test 00:04:40.295 LINK nvmf_tgt 00:04:40.295 LINK spdk_trace_record 00:04:40.295 LINK spdk_nvme_discover 00:04:40.565 LINK interrupt_tgt 00:04:40.565 LINK iscsi_tgt 00:04:40.565 LINK spdk_tgt 00:04:40.829 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:40.829 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:40.829 LINK poller_perf 00:04:40.829 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:40.829 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:40.829 CC test/env/mem_callbacks/mem_callbacks.o 00:04:41.091 LINK ioat_perf 00:04:41.091 LINK spdk_dd 00:04:41.091 LINK vtophys 00:04:41.091 LINK spdk_trace 00:04:41.091 LINK jsoncat 00:04:41.091 LINK zipf 00:04:41.091 LINK histogram_perf 00:04:41.091 LINK bdev_svc 00:04:41.353 LINK stub 00:04:41.353 LINK env_dpdk_post_init 00:04:41.353 LINK verify 00:04:41.353 LINK test_dma 00:04:41.353 CC test/event/reactor/reactor.o 00:04:41.353 CC test/event/reactor_perf/reactor_perf.o 00:04:41.353 CC test/event/event_perf/event_perf.o 
00:04:41.615 CC test/event/app_repeat/app_repeat.o 00:04:41.615 CC test/event/scheduler/scheduler.o 00:04:41.615 CC app/vhost/vhost.o 00:04:41.615 LINK nvme_fuzz 00:04:41.615 LINK vhost_fuzz 00:04:41.615 LINK pci_ut 00:04:41.615 LINK reactor 00:04:41.615 LINK reactor_perf 00:04:41.615 LINK spdk_nvme_identify 00:04:41.615 LINK event_perf 00:04:41.615 LINK spdk_top 00:04:41.615 CC examples/sock/hello_world/hello_sock.o 00:04:41.615 CC examples/vmd/lsvmd/lsvmd.o 00:04:41.615 CC examples/vmd/led/led.o 00:04:41.615 LINK mem_callbacks 00:04:41.615 CC examples/idxd/perf/perf.o 00:04:41.615 LINK app_repeat 00:04:41.615 LINK spdk_nvme 00:04:41.878 CC examples/thread/thread/thread_ex.o 00:04:41.878 LINK scheduler 00:04:41.878 LINK vhost 00:04:41.878 LINK spdk_bdev 00:04:41.878 LINK spdk_nvme_perf 00:04:41.878 LINK lsvmd 00:04:41.878 LINK led 00:04:41.878 LINK hello_sock 00:04:42.138 CC test/nvme/aer/aer.o 00:04:42.138 CC test/nvme/e2edp/nvme_dp.o 00:04:42.138 CC test/nvme/fdp/fdp.o 00:04:42.138 CC test/nvme/reset/reset.o 00:04:42.138 CC test/nvme/cuse/cuse.o 00:04:42.138 CC test/nvme/reserve/reserve.o 00:04:42.138 CC test/nvme/compliance/nvme_compliance.o 00:04:42.138 CC test/nvme/simple_copy/simple_copy.o 00:04:42.138 CC test/nvme/sgl/sgl.o 00:04:42.138 CC test/nvme/connect_stress/connect_stress.o 00:04:42.138 CC test/nvme/overhead/overhead.o 00:04:42.138 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:42.138 CC test/accel/dif/dif.o 00:04:42.138 CC test/nvme/fused_ordering/fused_ordering.o 00:04:42.138 CC test/nvme/boot_partition/boot_partition.o 00:04:42.138 CC test/nvme/err_injection/err_injection.o 00:04:42.138 CC test/nvme/startup/startup.o 00:04:42.138 CC test/blobfs/mkfs/mkfs.o 00:04:42.138 LINK idxd_perf 00:04:42.138 LINK thread 00:04:42.138 LINK memory_ut 00:04:42.138 CC test/lvol/esnap/esnap.o 00:04:42.399 LINK boot_partition 00:04:42.399 LINK startup 00:04:42.399 LINK err_injection 00:04:42.399 LINK connect_stress 00:04:42.399 LINK doorbell_aers 00:04:42.399 LINK 
reserve 00:04:42.399 LINK fused_ordering 00:04:42.399 LINK reset 00:04:42.399 LINK simple_copy 00:04:42.399 LINK nvme_dp 00:04:42.399 LINK sgl 00:04:42.399 LINK aer 00:04:42.399 LINK mkfs 00:04:42.399 LINK overhead 00:04:42.399 LINK nvme_compliance 00:04:42.399 LINK fdp 00:04:42.664 CC examples/nvme/abort/abort.o 00:04:42.664 CC examples/nvme/arbitration/arbitration.o 00:04:42.664 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:42.664 CC examples/nvme/hello_world/hello_world.o 00:04:42.664 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:42.664 CC examples/nvme/hotplug/hotplug.o 00:04:42.664 CC examples/nvme/reconnect/reconnect.o 00:04:42.664 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:42.664 LINK dif 00:04:42.664 LINK iscsi_fuzz 00:04:42.664 CC examples/accel/perf/accel_perf.o 00:04:42.664 LINK pmr_persistence 00:04:42.664 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:42.664 CC examples/blob/cli/blobcli.o 00:04:42.664 CC examples/blob/hello_world/hello_blob.o 00:04:42.664 LINK cmb_copy 00:04:42.926 LINK hotplug 00:04:42.926 LINK hello_world 00:04:42.926 LINK arbitration 00:04:42.926 LINK reconnect 00:04:42.926 LINK abort 00:04:42.926 LINK nvme_manage 00:04:43.188 LINK hello_fsdev 00:04:43.188 LINK hello_blob 00:04:43.188 LINK accel_perf 00:04:43.188 LINK blobcli 00:04:43.188 LINK cuse 00:04:43.449 CC test/bdev/bdevio/bdevio.o 00:04:43.710 LINK bdevio 00:04:43.710 CC examples/bdev/hello_world/hello_bdev.o 00:04:43.710 CC examples/bdev/bdevperf/bdevperf.o 00:04:44.283 LINK hello_bdev 00:04:44.544 LINK bdevperf 00:04:45.116 CC examples/nvmf/nvmf/nvmf.o 00:04:45.688 LINK nvmf 00:04:46.631 LINK esnap 00:04:46.893 00:04:46.893 real 0m56.255s 00:04:46.893 user 7m58.646s 00:04:46.893 sys 5m34.973s 00:04:46.893 09:22:33 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:46.893 09:22:33 make -- common/autotest_common.sh@10 -- $ set +x 00:04:46.893 ************************************ 00:04:46.893 END TEST make 00:04:46.893 
************************************ 00:04:46.893 09:22:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:46.893 09:22:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:46.893 09:22:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:46.893 09:22:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.893 09:22:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:46.893 09:22:33 -- pm/common@44 -- $ pid=6940 00:04:46.893 09:22:33 -- pm/common@50 -- $ kill -TERM 6940 00:04:46.893 09:22:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.893 09:22:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:46.893 09:22:33 -- pm/common@44 -- $ pid=6942 00:04:46.893 09:22:33 -- pm/common@50 -- $ kill -TERM 6942 00:04:46.894 09:22:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.894 09:22:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:46.894 09:22:33 -- pm/common@44 -- $ pid=6943 00:04:46.894 09:22:33 -- pm/common@50 -- $ kill -TERM 6943 00:04:46.894 09:22:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.894 09:22:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:46.894 09:22:33 -- pm/common@44 -- $ pid=6966 00:04:46.894 09:22:33 -- pm/common@50 -- $ sudo -E kill -TERM 6966 00:04:46.894 09:22:33 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:46.894 09:22:33 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:46.894 09:22:33 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.894 09:22:33 -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:46.894 09:22:33 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.157 09:22:33 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.157 09:22:33 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.157 09:22:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.157 09:22:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.157 09:22:33 -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.157 09:22:33 -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.157 09:22:33 -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.157 09:22:33 -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.157 09:22:33 -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.157 09:22:33 -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.157 09:22:33 -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.157 09:22:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.157 09:22:33 -- scripts/common.sh@344 -- # case "$op" in 00:04:47.157 09:22:33 -- scripts/common.sh@345 -- # : 1 00:04:47.157 09:22:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.157 09:22:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.157 09:22:33 -- scripts/common.sh@365 -- # decimal 1 00:04:47.157 09:22:33 -- scripts/common.sh@353 -- # local d=1 00:04:47.157 09:22:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.157 09:22:33 -- scripts/common.sh@355 -- # echo 1 00:04:47.157 09:22:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.157 09:22:33 -- scripts/common.sh@366 -- # decimal 2 00:04:47.157 09:22:33 -- scripts/common.sh@353 -- # local d=2 00:04:47.157 09:22:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.157 09:22:33 -- scripts/common.sh@355 -- # echo 2 00:04:47.157 09:22:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.157 09:22:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.157 09:22:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.157 09:22:33 -- scripts/common.sh@368 -- # return 0 00:04:47.157 09:22:33 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.157 09:22:33 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.157 --rc genhtml_branch_coverage=1 00:04:47.157 --rc genhtml_function_coverage=1 00:04:47.157 --rc genhtml_legend=1 00:04:47.157 --rc geninfo_all_blocks=1 00:04:47.157 --rc geninfo_unexecuted_blocks=1 00:04:47.157 00:04:47.157 ' 00:04:47.157 09:22:33 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.157 --rc genhtml_branch_coverage=1 00:04:47.157 --rc genhtml_function_coverage=1 00:04:47.157 --rc genhtml_legend=1 00:04:47.157 --rc geninfo_all_blocks=1 00:04:47.157 --rc geninfo_unexecuted_blocks=1 00:04:47.157 00:04:47.157 ' 00:04:47.157 09:22:33 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.157 --rc genhtml_branch_coverage=1 00:04:47.157 --rc 
genhtml_function_coverage=1 00:04:47.157 --rc genhtml_legend=1 00:04:47.157 --rc geninfo_all_blocks=1 00:04:47.157 --rc geninfo_unexecuted_blocks=1 00:04:47.157 00:04:47.157 ' 00:04:47.157 09:22:33 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.157 --rc genhtml_branch_coverage=1 00:04:47.157 --rc genhtml_function_coverage=1 00:04:47.157 --rc genhtml_legend=1 00:04:47.157 --rc geninfo_all_blocks=1 00:04:47.157 --rc geninfo_unexecuted_blocks=1 00:04:47.157 00:04:47.157 ' 00:04:47.157 09:22:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.157 09:22:33 -- nvmf/common.sh@7 -- # uname -s 00:04:47.157 09:22:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.157 09:22:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.157 09:22:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.157 09:22:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.157 09:22:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.157 09:22:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.157 09:22:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.157 09:22:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.157 09:22:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.157 09:22:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.157 09:22:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:47.157 09:22:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:47.157 09:22:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.157 09:22:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.157 09:22:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.158 09:22:33 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.158 09:22:33 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.158 09:22:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.158 09:22:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.158 09:22:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.158 09:22:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.158 09:22:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.158 09:22:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.158 09:22:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.158 09:22:33 -- paths/export.sh@5 -- # export PATH 00:04:47.158 09:22:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.158 09:22:33 -- nvmf/common.sh@51 -- # : 0 00:04:47.158 09:22:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.158 09:22:33 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:47.158 09:22:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.158 09:22:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.158 09:22:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.158 09:22:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.158 09:22:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.158 09:22:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.158 09:22:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.158 09:22:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:47.158 09:22:33 -- spdk/autotest.sh@32 -- # uname -s 00:04:47.158 09:22:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:47.158 09:22:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:47.158 09:22:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:47.158 09:22:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:47.158 09:22:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:47.158 09:22:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:47.158 09:22:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:47.158 09:22:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:47.158 09:22:33 -- spdk/autotest.sh@48 -- # udevadm_pid=73686 00:04:47.158 09:22:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:47.158 09:22:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:47.158 09:22:33 -- pm/common@17 -- # local monitor 00:04:47.158 09:22:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.158 09:22:33 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:47.158 09:22:33 -- pm/common@21 -- # date +%s 00:04:47.158 09:22:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.158 09:22:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.158 09:22:33 -- pm/common@21 -- # date +%s 00:04:47.158 09:22:33 -- pm/common@25 -- # sleep 1 00:04:47.158 09:22:33 -- pm/common@21 -- # date +%s 00:04:47.158 09:22:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732004553 00:04:47.158 09:22:33 -- pm/common@21 -- # date +%s 00:04:47.158 09:22:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732004553 00:04:47.158 09:22:33 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732004553 00:04:47.158 09:22:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732004553 00:04:47.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732004553_collect-cpu-load.pm.log 00:04:47.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732004553_collect-vmstat.pm.log 00:04:47.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732004553_collect-cpu-temp.pm.log 00:04:47.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732004553_collect-bmc-pm.bmc.pm.log 00:04:48.365 
09:22:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:48.365 09:22:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:48.365 09:22:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.365 09:22:34 -- common/autotest_common.sh@10 -- # set +x 00:04:48.365 09:22:34 -- spdk/autotest.sh@59 -- # create_test_list 00:04:48.365 09:22:34 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:48.365 09:22:34 -- common/autotest_common.sh@10 -- # set +x 00:04:48.365 09:22:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:48.365 09:22:34 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:48.365 09:22:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:48.365 09:22:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:48.365 09:22:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:48.365 09:22:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:48.365 09:22:34 -- common/autotest_common.sh@1457 -- # uname 00:04:48.365 09:22:34 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:48.365 09:22:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:48.365 09:22:34 -- common/autotest_common.sh@1477 -- # uname 00:04:48.365 09:22:34 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:48.365 09:22:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:48.365 09:22:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:48.365 lcov: LCOV version 1.15 00:04:48.365 09:22:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:14.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:14.979 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:19.195 09:23:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:19.195 09:23:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.195 09:23:05 -- common/autotest_common.sh@10 -- # set +x 00:05:19.195 09:23:05 -- spdk/autotest.sh@78 -- # rm -f 00:05:19.195 09:23:05 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.502 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:65:00.0 (144d a80a): Already using the nvme driver 00:05:22.502 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:05:22.502 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:05:22.502 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:05:22.764 09:23:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:22.764 09:23:09 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:22.764 09:23:09 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:22.764 09:23:09 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:22.764 09:23:09 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:22.764 09:23:09 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:22.764 09:23:09 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:22.764 09:23:09 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:22.764 09:23:09 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:22.764 09:23:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:22.764 09:23:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.764 09:23:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:22.764 09:23:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:22.764 09:23:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:22.764 09:23:09 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:22.764 No valid GPT data, bailing 00:05:22.764 09:23:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:22.764 09:23:09 -- scripts/common.sh@394 -- # pt= 00:05:22.764 09:23:09 -- scripts/common.sh@395 -- # return 1 00:05:22.764 09:23:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:22.764 1+0 records in 00:05:22.764 1+0 records out 00:05:22.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049904 s, 210 MB/s 00:05:22.764 09:23:09 -- spdk/autotest.sh@105 -- # sync 00:05:22.764 09:23:09 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:22.764 09:23:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:22.764 09:23:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:32.863 09:23:17 -- spdk/autotest.sh@111 -- # uname -s 00:05:32.863 09:23:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:32.863 09:23:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:32.863 09:23:17 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:34.780 Hugepages 00:05:34.780 node hugesize free / total 00:05:34.780 node0 1048576kB 0 / 0 00:05:34.780 node0 2048kB 0 / 0 00:05:34.780 node1 1048576kB 0 / 0 00:05:34.780 node1 2048kB 0 / 0 00:05:34.780 00:05:34.780 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:34.780 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:34.780 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:34.780 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:34.780 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:34.780 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:34.780 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:34.780 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:34.780 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:35.042 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:35.042 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:35.042 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:35.042 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:35.042 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:35.042 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:35.042 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:35.042 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:35.042 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:35.042 09:23:21 -- spdk/autotest.sh@117 -- # uname -s 00:05:35.042 09:23:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:35.042 09:23:21 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:35.042 09:23:21 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:38.349 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:38.349 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:38.349 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:38.611 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:40.528 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:40.789 09:23:27 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:41.734 09:23:28 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:41.734 09:23:28 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:41.734 09:23:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:41.734 09:23:28 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:41.734 09:23:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:41.734 09:23:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:41.734 09:23:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:41.734 09:23:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:41.734 09:23:28 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:41.734 09:23:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:41.734 09:23:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:41.734 09:23:28 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:45.954 Waiting for block devices as requested 00:05:45.954 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:45.954 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:45.954 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:45.954 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:45.954 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:45.954 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:45.954 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:45.954 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:45.954 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:46.217 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:46.217 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:46.217 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:46.480 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:46.480 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:46.480 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:46.480 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:46.741 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:47.002 09:23:33 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:47.002 09:23:33 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:47.002 09:23:33 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:47.002 09:23:33 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:47.002 09:23:33 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:47.002 09:23:33 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:47.002 09:23:33 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:47.002 09:23:33 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:47.002 09:23:33 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:47.002 09:23:33 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:47.002 09:23:33 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:47.002 09:23:33 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:47.002 09:23:33 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:47.002 09:23:33 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:05:47.002 09:23:33 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:47.002 09:23:33 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:47.002 09:23:33 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:47.002 09:23:33 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:47.002 09:23:33 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:47.002 09:23:33 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:47.002 09:23:33 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:47.002 09:23:33 -- common/autotest_common.sh@1543 -- # continue 00:05:47.002 09:23:33 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:47.002 09:23:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.002 09:23:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.002 09:23:33 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:47.002 09:23:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.002 09:23:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.002 09:23:33 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:51.216 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:05:51.216 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:51.216 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:51.216 09:23:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:51.216 09:23:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.216 09:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:51.216 09:23:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:51.216 09:23:37 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:51.216 09:23:37 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:51.216 09:23:37 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:51.216 09:23:37 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:51.216 09:23:37 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:51.216 09:23:37 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:51.216 09:23:37 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:51.216 09:23:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:51.216 09:23:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:51.216 09:23:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:51.216 09:23:37 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.216 09:23:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:51.216 09:23:37 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:51.216 09:23:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:51.216 09:23:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:51.216 09:23:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:51.216 09:23:37 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:05:51.216 09:23:37 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:51.216 09:23:37 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:51.216 09:23:37 -- common/autotest_common.sh@1572 -- # return 0 00:05:51.216 09:23:37 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:51.216 09:23:37 -- common/autotest_common.sh@1580 -- # return 0 00:05:51.216 09:23:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:51.216 09:23:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:51.216 09:23:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:51.216 09:23:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:51.216 09:23:37 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:51.216 09:23:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.216 09:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:51.216 09:23:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:51.216 09:23:37 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:51.216 09:23:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.216 09:23:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.216 09:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:51.216 ************************************ 
00:05:51.216 START TEST env 00:05:51.216 ************************************ 00:05:51.216 09:23:37 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:51.478 * Looking for test storage... 00:05:51.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.478 09:23:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.478 09:23:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.478 09:23:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.478 09:23:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.478 09:23:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.478 09:23:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.478 09:23:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.478 09:23:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.478 09:23:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.478 09:23:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.478 09:23:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.478 09:23:38 env -- scripts/common.sh@344 -- # case "$op" in 00:05:51.478 09:23:38 env -- scripts/common.sh@345 -- # : 1 00:05:51.478 09:23:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.478 09:23:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.478 09:23:38 env -- scripts/common.sh@365 -- # decimal 1 00:05:51.478 09:23:38 env -- scripts/common.sh@353 -- # local d=1 00:05:51.478 09:23:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.478 09:23:38 env -- scripts/common.sh@355 -- # echo 1 00:05:51.478 09:23:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.478 09:23:38 env -- scripts/common.sh@366 -- # decimal 2 00:05:51.478 09:23:38 env -- scripts/common.sh@353 -- # local d=2 00:05:51.478 09:23:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.478 09:23:38 env -- scripts/common.sh@355 -- # echo 2 00:05:51.478 09:23:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.478 09:23:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.478 09:23:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.478 09:23:38 env -- scripts/common.sh@368 -- # return 0 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.478 --rc genhtml_branch_coverage=1 00:05:51.478 --rc genhtml_function_coverage=1 00:05:51.478 --rc genhtml_legend=1 00:05:51.478 --rc geninfo_all_blocks=1 00:05:51.478 --rc geninfo_unexecuted_blocks=1 00:05:51.478 00:05:51.478 ' 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.478 --rc genhtml_branch_coverage=1 00:05:51.478 --rc genhtml_function_coverage=1 00:05:51.478 --rc genhtml_legend=1 00:05:51.478 --rc geninfo_all_blocks=1 00:05:51.478 --rc geninfo_unexecuted_blocks=1 00:05:51.478 00:05:51.478 ' 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:51.478 --rc genhtml_branch_coverage=1 00:05:51.478 --rc genhtml_function_coverage=1 00:05:51.478 --rc genhtml_legend=1 00:05:51.478 --rc geninfo_all_blocks=1 00:05:51.478 --rc geninfo_unexecuted_blocks=1 00:05:51.478 00:05:51.478 ' 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.478 --rc genhtml_branch_coverage=1 00:05:51.478 --rc genhtml_function_coverage=1 00:05:51.478 --rc genhtml_legend=1 00:05:51.478 --rc geninfo_all_blocks=1 00:05:51.478 --rc geninfo_unexecuted_blocks=1 00:05:51.478 00:05:51.478 ' 00:05:51.478 09:23:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.478 09:23:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.478 09:23:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.478 ************************************ 00:05:51.478 START TEST env_memory 00:05:51.478 ************************************ 00:05:51.478 09:23:38 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:51.478 00:05:51.478 00:05:51.478 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.478 http://cunit.sourceforge.net/ 00:05:51.478 00:05:51.478 00:05:51.478 Suite: memory 00:05:51.478 Test: alloc and free memory map ...[2024-11-19 09:23:38.204697] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:51.478 passed 00:05:51.740 Test: mem map translation ...[2024-11-19 09:23:38.222507] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:51.741 [2024-11-19 
09:23:38.222531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:51.741 [2024-11-19 09:23:38.222565] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:51.741 [2024-11-19 09:23:38.222572] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:51.741 passed 00:05:51.741 Test: mem map registration ...[2024-11-19 09:23:38.260546] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:51.741 [2024-11-19 09:23:38.260570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:51.741 passed 00:05:51.741 Test: mem map adjacent registrations ...passed 00:05:51.741 00:05:51.741 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.741 suites 1 1 n/a 0 0 00:05:51.741 tests 4 4 4 0 0 00:05:51.741 asserts 152 152 152 0 n/a 00:05:51.741 00:05:51.741 Elapsed time = 0.125 seconds 00:05:51.741 00:05:51.741 real 0m0.141s 00:05:51.741 user 0m0.123s 00:05:51.741 sys 0m0.016s 00:05:51.741 09:23:38 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.741 09:23:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:51.741 ************************************ 00:05:51.741 END TEST env_memory 00:05:51.741 ************************************ 00:05:51.741 09:23:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:51.741 09:23:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:51.741 09:23:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.741 09:23:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.741 ************************************ 00:05:51.741 START TEST env_vtophys 00:05:51.741 ************************************ 00:05:51.741 09:23:38 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:51.741 EAL: lib.eal log level changed from notice to debug 00:05:51.741 EAL: Detected lcore 0 as core 0 on socket 0 00:05:51.741 EAL: Detected lcore 1 as core 1 on socket 0 00:05:51.741 EAL: Detected lcore 2 as core 2 on socket 0 00:05:51.741 EAL: Detected lcore 3 as core 3 on socket 0 00:05:51.741 EAL: Detected lcore 4 as core 4 on socket 0 00:05:51.741 EAL: Detected lcore 5 as core 5 on socket 0 00:05:51.741 EAL: Detected lcore 6 as core 6 on socket 0 00:05:51.741 EAL: Detected lcore 7 as core 7 on socket 0 00:05:51.741 EAL: Detected lcore 8 as core 8 on socket 0 00:05:51.741 EAL: Detected lcore 9 as core 9 on socket 0 00:05:51.741 EAL: Detected lcore 10 as core 10 on socket 0 00:05:51.741 EAL: Detected lcore 11 as core 11 on socket 0 00:05:51.741 EAL: Detected lcore 12 as core 12 on socket 0 00:05:51.741 EAL: Detected lcore 13 as core 13 on socket 0 00:05:51.741 EAL: Detected lcore 14 as core 14 on socket 0 00:05:51.741 EAL: Detected lcore 15 as core 15 on socket 0 00:05:51.741 EAL: Detected lcore 16 as core 16 on socket 0 00:05:51.741 EAL: Detected lcore 17 as core 17 on socket 0 00:05:51.741 EAL: Detected lcore 18 as core 18 on socket 0 00:05:51.741 EAL: Detected lcore 19 as core 19 on socket 0 00:05:51.741 EAL: Detected lcore 20 as core 20 on socket 0 00:05:51.741 EAL: Detected lcore 21 as core 21 on socket 0 00:05:51.741 EAL: Detected lcore 22 as core 22 on socket 0 00:05:51.741 EAL: Detected lcore 23 as core 23 on socket 0 00:05:51.741 EAL: Detected lcore 24 as core 24 on socket 0 00:05:51.741 EAL: Detected lcore 25 
as core 25 on socket 0 00:05:51.741 EAL: Detected lcore 26 as core 26 on socket 0 00:05:51.741 EAL: Detected lcore 27 as core 27 on socket 0 00:05:51.741 EAL: Detected lcore 28 as core 28 on socket 0 00:05:51.741 EAL: Detected lcore 29 as core 29 on socket 0 00:05:51.741 EAL: Detected lcore 30 as core 30 on socket 0 00:05:51.741 EAL: Detected lcore 31 as core 31 on socket 0 00:05:51.741 EAL: Detected lcore 32 as core 32 on socket 0 00:05:51.741 EAL: Detected lcore 33 as core 33 on socket 0 00:05:51.741 EAL: Detected lcore 34 as core 34 on socket 0 00:05:51.741 EAL: Detected lcore 35 as core 35 on socket 0 00:05:51.741 EAL: Detected lcore 36 as core 0 on socket 1 00:05:51.741 EAL: Detected lcore 37 as core 1 on socket 1 00:05:51.741 EAL: Detected lcore 38 as core 2 on socket 1 00:05:51.741 EAL: Detected lcore 39 as core 3 on socket 1 00:05:51.741 EAL: Detected lcore 40 as core 4 on socket 1 00:05:51.741 EAL: Detected lcore 41 as core 5 on socket 1 00:05:51.741 EAL: Detected lcore 42 as core 6 on socket 1 00:05:51.741 EAL: Detected lcore 43 as core 7 on socket 1 00:05:51.741 EAL: Detected lcore 44 as core 8 on socket 1 00:05:51.741 EAL: Detected lcore 45 as core 9 on socket 1 00:05:51.741 EAL: Detected lcore 46 as core 10 on socket 1 00:05:51.741 EAL: Detected lcore 47 as core 11 on socket 1 00:05:51.741 EAL: Detected lcore 48 as core 12 on socket 1 00:05:51.741 EAL: Detected lcore 49 as core 13 on socket 1 00:05:51.741 EAL: Detected lcore 50 as core 14 on socket 1 00:05:51.741 EAL: Detected lcore 51 as core 15 on socket 1 00:05:51.741 EAL: Detected lcore 52 as core 16 on socket 1 00:05:51.741 EAL: Detected lcore 53 as core 17 on socket 1 00:05:51.741 EAL: Detected lcore 54 as core 18 on socket 1 00:05:51.741 EAL: Detected lcore 55 as core 19 on socket 1 00:05:51.741 EAL: Detected lcore 56 as core 20 on socket 1 00:05:51.741 EAL: Detected lcore 57 as core 21 on socket 1 00:05:51.741 EAL: Detected lcore 58 as core 22 on socket 1 00:05:51.741 EAL: Detected lcore 59 as 
core 23 on socket 1 00:05:51.741 EAL: Detected lcore 60 as core 24 on socket 1 00:05:51.741 EAL: Detected lcore 61 as core 25 on socket 1 00:05:51.741 EAL: Detected lcore 62 as core 26 on socket 1 00:05:51.741 EAL: Detected lcore 63 as core 27 on socket 1 00:05:51.741 EAL: Detected lcore 64 as core 28 on socket 1 00:05:51.741 EAL: Detected lcore 65 as core 29 on socket 1 00:05:51.741 EAL: Detected lcore 66 as core 30 on socket 1 00:05:51.741 EAL: Detected lcore 67 as core 31 on socket 1 00:05:51.741 EAL: Detected lcore 68 as core 32 on socket 1 00:05:51.741 EAL: Detected lcore 69 as core 33 on socket 1 00:05:51.741 EAL: Detected lcore 70 as core 34 on socket 1 00:05:51.741 EAL: Detected lcore 71 as core 35 on socket 1 00:05:51.741 EAL: Detected lcore 72 as core 0 on socket 0 00:05:51.741 EAL: Detected lcore 73 as core 1 on socket 0 00:05:51.741 EAL: Detected lcore 74 as core 2 on socket 0 00:05:51.741 EAL: Detected lcore 75 as core 3 on socket 0 00:05:51.741 EAL: Detected lcore 76 as core 4 on socket 0 00:05:51.741 EAL: Detected lcore 77 as core 5 on socket 0 00:05:51.741 EAL: Detected lcore 78 as core 6 on socket 0 00:05:51.741 EAL: Detected lcore 79 as core 7 on socket 0 00:05:51.741 EAL: Detected lcore 80 as core 8 on socket 0 00:05:51.741 EAL: Detected lcore 81 as core 9 on socket 0 00:05:51.741 EAL: Detected lcore 82 as core 10 on socket 0 00:05:51.741 EAL: Detected lcore 83 as core 11 on socket 0 00:05:51.741 EAL: Detected lcore 84 as core 12 on socket 0 00:05:51.741 EAL: Detected lcore 85 as core 13 on socket 0 00:05:51.741 EAL: Detected lcore 86 as core 14 on socket 0 00:05:51.741 EAL: Detected lcore 87 as core 15 on socket 0 00:05:51.741 EAL: Detected lcore 88 as core 16 on socket 0 00:05:51.741 EAL: Detected lcore 89 as core 17 on socket 0 00:05:51.741 EAL: Detected lcore 90 as core 18 on socket 0 00:05:51.741 EAL: Detected lcore 91 as core 19 on socket 0 00:05:51.741 EAL: Detected lcore 92 as core 20 on socket 0 00:05:51.741 EAL: Detected lcore 93 as 
core 21 on socket 0 00:05:51.741 EAL: Detected lcore 94 as core 22 on socket 0 00:05:51.741 EAL: Detected lcore 95 as core 23 on socket 0 00:05:51.741 EAL: Detected lcore 96 as core 24 on socket 0 00:05:51.741 EAL: Detected lcore 97 as core 25 on socket 0 00:05:51.741 EAL: Detected lcore 98 as core 26 on socket 0 00:05:51.741 EAL: Detected lcore 99 as core 27 on socket 0 00:05:51.741 EAL: Detected lcore 100 as core 28 on socket 0 00:05:51.741 EAL: Detected lcore 101 as core 29 on socket 0 00:05:51.741 EAL: Detected lcore 102 as core 30 on socket 0 00:05:51.741 EAL: Detected lcore 103 as core 31 on socket 0 00:05:51.741 EAL: Detected lcore 104 as core 32 on socket 0 00:05:51.741 EAL: Detected lcore 105 as core 33 on socket 0 00:05:51.741 EAL: Detected lcore 106 as core 34 on socket 0 00:05:51.741 EAL: Detected lcore 107 as core 35 on socket 0 00:05:51.741 EAL: Detected lcore 108 as core 0 on socket 1 00:05:51.741 EAL: Detected lcore 109 as core 1 on socket 1 00:05:51.741 EAL: Detected lcore 110 as core 2 on socket 1 00:05:51.741 EAL: Detected lcore 111 as core 3 on socket 1 00:05:51.741 EAL: Detected lcore 112 as core 4 on socket 1 00:05:51.741 EAL: Detected lcore 113 as core 5 on socket 1 00:05:51.741 EAL: Detected lcore 114 as core 6 on socket 1 00:05:51.741 EAL: Detected lcore 115 as core 7 on socket 1 00:05:51.741 EAL: Detected lcore 116 as core 8 on socket 1 00:05:51.741 EAL: Detected lcore 117 as core 9 on socket 1 00:05:51.741 EAL: Detected lcore 118 as core 10 on socket 1 00:05:51.741 EAL: Detected lcore 119 as core 11 on socket 1 00:05:51.741 EAL: Detected lcore 120 as core 12 on socket 1 00:05:51.741 EAL: Detected lcore 121 as core 13 on socket 1 00:05:51.741 EAL: Detected lcore 122 as core 14 on socket 1 00:05:51.741 EAL: Detected lcore 123 as core 15 on socket 1 00:05:51.741 EAL: Detected lcore 124 as core 16 on socket 1 00:05:51.741 EAL: Detected lcore 125 as core 17 on socket 1 00:05:51.741 EAL: Detected lcore 126 as core 18 on socket 1 00:05:51.741 
EAL: Detected lcore 127 as core 19 on socket 1 00:05:51.741 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:51.741 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:51.741 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:51.741 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:51.741 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:51.741 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:51.741 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:51.741 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:51.741 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:51.741 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:51.741 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:51.741 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:51.741 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:51.742 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:51.742 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:51.742 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:51.742 EAL: Maximum logical cores by configuration: 128 00:05:51.742 EAL: Detected CPU lcores: 128 00:05:51.742 EAL: Detected NUMA nodes: 2 00:05:51.742 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:51.742 EAL: Detected shared linkage of DPDK 00:05:51.742 EAL: No shared files mode enabled, IPC will be disabled 00:05:51.742 EAL: Bus pci wants IOVA as 'DC' 00:05:51.742 EAL: Buses did not request a specific IOVA mode. 00:05:51.742 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:51.742 EAL: Selected IOVA mode 'VA' 00:05:51.742 EAL: Probing VFIO support... 00:05:51.742 EAL: IOMMU type 1 (Type 1) is supported 00:05:51.742 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:51.742 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:51.742 EAL: VFIO support initialized 00:05:51.742 EAL: Ask a virtual area of 0x2e000 bytes 00:05:51.742 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:51.742 EAL: Setting up physically contiguous memory... 
00:05:51.742 EAL: Setting maximum number of open files to 524288 00:05:51.742 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:51.742 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:51.742 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:51.742 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.742 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:51.742 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.742 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.742 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:51.742 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:51.742 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.742 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:51.742 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.742 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.742 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:51.742 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:51.742 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.742 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:51.742 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.742 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.742 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:51.742 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:51.742 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.742 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:51.742 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.742 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.742 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:51.742 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:51.742 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:51.742 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.742 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:51.742 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.742 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.742 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:51.742 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:51.742 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.742 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:51.742 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.742 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.742 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:51.742 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:51.742 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.742 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:51.742 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.742 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.742 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:51.742 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:51.742 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.742 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:51.742 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:51.742 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.742 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:51.742 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:51.742 EAL: Hugepages will be freed exactly as allocated. 
00:05:51.742 EAL: No shared files mode enabled, IPC is disabled
00:05:51.742 EAL: No shared files mode enabled, IPC is disabled
00:05:51.742 EAL: TSC frequency is ~2400000 KHz
00:05:51.742 EAL: Main lcore 0 is ready (tid=7f4cbb555a00;cpuset=[0])
00:05:51.742 EAL: Trying to obtain current memory policy.
00:05:51.742 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.742 EAL: Restoring previous memory policy: 0
00:05:51.742 EAL: request: mp_malloc_sync
00:05:51.742 EAL: No shared files mode enabled, IPC is disabled
00:05:51.742 EAL: Heap on socket 0 was expanded by 2MB
00:05:51.742 EAL: No shared files mode enabled, IPC is disabled
00:05:52.004 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:52.004 EAL: Mem event callback 'spdk:(nil)' registered
00:05:52.004
00:05:52.004
00:05:52.004 CUnit - A unit testing framework for C - Version 2.1-3
00:05:52.004 http://cunit.sourceforge.net/
00:05:52.004
00:05:52.004
00:05:52.004 Suite: components_suite
00:05:52.004 Test: vtophys_malloc_test ...passed
00:05:52.004 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:52.004 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.004 EAL: Restoring previous memory policy: 4
00:05:52.004 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.004 EAL: request: mp_malloc_sync
00:05:52.004 EAL: No shared files mode enabled, IPC is disabled
00:05:52.004 EAL: Heap on socket 0 was expanded by 4MB
00:05:52.004 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.004 EAL: request: mp_malloc_sync
00:05:52.004 EAL: No shared files mode enabled, IPC is disabled
00:05:52.004 EAL: Heap on socket 0 was shrunk by 4MB
00:05:52.004 EAL: Trying to obtain current memory policy.
00:05:52.004 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.004 EAL: Restoring previous memory policy: 4
00:05:52.004 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.004 EAL: request: mp_malloc_sync
00:05:52.004 EAL: No shared files mode enabled, IPC is disabled
00:05:52.004 EAL: Heap on socket 0 was expanded by 6MB
00:05:52.004 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.004 EAL: request: mp_malloc_sync
00:05:52.004 EAL: No shared files mode enabled, IPC is disabled
00:05:52.004 EAL: Heap on socket 0 was shrunk by 6MB
00:05:52.004 EAL: Trying to obtain current memory policy.
00:05:52.004 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.004 EAL: Restoring previous memory policy: 4
00:05:52.004 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.004 EAL: request: mp_malloc_sync
00:05:52.004 EAL: No shared files mode enabled, IPC is disabled
00:05:52.004 EAL: Heap on socket 0 was expanded by 10MB
00:05:52.004 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.004 EAL: request: mp_malloc_sync
00:05:52.004 EAL: No shared files mode enabled, IPC is disabled
00:05:52.004 EAL: Heap on socket 0 was shrunk by 10MB
00:05:52.004 EAL: Trying to obtain current memory policy.
00:05:52.004 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.004 EAL: Restoring previous memory policy: 4
00:05:52.004 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.004 EAL: request: mp_malloc_sync
00:05:52.004 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was expanded by 18MB
00:05:52.005 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.005 EAL: request: mp_malloc_sync
00:05:52.005 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was shrunk by 18MB
00:05:52.005 EAL: Trying to obtain current memory policy.
00:05:52.005 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.005 EAL: Restoring previous memory policy: 4
00:05:52.005 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.005 EAL: request: mp_malloc_sync
00:05:52.005 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was expanded by 34MB
00:05:52.005 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.005 EAL: request: mp_malloc_sync
00:05:52.005 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was shrunk by 34MB
00:05:52.005 EAL: Trying to obtain current memory policy.
00:05:52.005 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.005 EAL: Restoring previous memory policy: 4
00:05:52.005 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.005 EAL: request: mp_malloc_sync
00:05:52.005 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was expanded by 66MB
00:05:52.005 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.005 EAL: request: mp_malloc_sync
00:05:52.005 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was shrunk by 66MB
00:05:52.005 EAL: Trying to obtain current memory policy.
00:05:52.005 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.005 EAL: Restoring previous memory policy: 4
00:05:52.005 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.005 EAL: request: mp_malloc_sync
00:05:52.005 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was expanded by 130MB
00:05:52.005 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.005 EAL: request: mp_malloc_sync
00:05:52.005 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was shrunk by 130MB
00:05:52.005 EAL: Trying to obtain current memory policy.
00:05:52.005 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.005 EAL: Restoring previous memory policy: 4
00:05:52.005 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.005 EAL: request: mp_malloc_sync
00:05:52.005 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was expanded by 258MB
00:05:52.005 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.005 EAL: request: mp_malloc_sync
00:05:52.005 EAL: No shared files mode enabled, IPC is disabled
00:05:52.005 EAL: Heap on socket 0 was shrunk by 258MB
00:05:52.005 EAL: Trying to obtain current memory policy.
00:05:52.005 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.265 EAL: Restoring previous memory policy: 4
00:05:52.265 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.265 EAL: request: mp_malloc_sync
00:05:52.265 EAL: No shared files mode enabled, IPC is disabled
00:05:52.265 EAL: Heap on socket 0 was expanded by 514MB
00:05:52.265 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.265 EAL: request: mp_malloc_sync
00:05:52.265 EAL: No shared files mode enabled, IPC is disabled
00:05:52.265 EAL: Heap on socket 0 was shrunk by 514MB
00:05:52.265 EAL: Trying to obtain current memory policy.
00:05:52.265 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.526 EAL: Restoring previous memory policy: 4
00:05:52.526 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.526 EAL: request: mp_malloc_sync
00:05:52.526 EAL: No shared files mode enabled, IPC is disabled
00:05:52.526 EAL: Heap on socket 0 was expanded by 1026MB
00:05:52.526 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.526 EAL: request: mp_malloc_sync
00:05:52.526 EAL: No shared files mode enabled, IPC is disabled
00:05:52.526 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:52.526 passed
00:05:52.526
00:05:52.526 Run Summary: Type Total Ran Passed Failed Inactive
00:05:52.526 suites 1 1 n/a 0 0
00:05:52.526 tests 2 2 2 0 0
00:05:52.526 asserts 497 497 497 0 n/a
00:05:52.526
00:05:52.526 Elapsed time = 0.707 seconds
00:05:52.526 EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.526 EAL: request: mp_malloc_sync
00:05:52.526 EAL: No shared files mode enabled, IPC is disabled
00:05:52.526 EAL: Heap on socket 0 was shrunk by 2MB
00:05:52.526 EAL: No shared files mode enabled, IPC is disabled
00:05:52.526 EAL: No shared files mode enabled, IPC is disabled
00:05:52.526 EAL: No shared files mode enabled, IPC is disabled
00:05:52.526
00:05:52.526 real 0m0.856s
00:05:52.526 user 0m0.439s
00:05:52.526 sys 0m0.390s
00:05:52.526 09:23:39 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:52.526 09:23:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:52.526 ************************************
00:05:52.526 END TEST env_vtophys
00:05:52.526 ************************************
00:05:52.787 09:23:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:52.787 09:23:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:52.787 09:23:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:52.787 09:23:39 env -- common/autotest_common.sh@10 -- # set +x
00:05:52.787 ************************************
00:05:52.787 START TEST env_pci ************************************
00:05:52.787 09:23:39 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:52.787
00:05:52.787
00:05:52.787 CUnit - A unit testing framework for C - Version 2.1-3
00:05:52.787 http://cunit.sourceforge.net/
00:05:52.787
00:05:52.787
00:05:52.787 Suite: pci
00:05:52.787 Test: pci_hook ...[2024-11-19 09:23:39.340712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 93021 has claimed it
00:05:52.787 EAL: Cannot find device (10000:00:01.0)
00:05:52.787 EAL: Failed to attach device on primary process
00:05:52.787 passed
00:05:52.787
00:05:52.787 Run Summary: Type Total Ran Passed Failed Inactive
00:05:52.787 suites 1 1 n/a 0 0
00:05:52.787 tests 1 1 1 0 0
00:05:52.787 asserts 25 25 25 0 n/a
00:05:52.787
00:05:52.787 Elapsed time = 0.031 seconds
00:05:52.787
00:05:52.787 real 0m0.053s
00:05:52.787 user 0m0.019s
00:05:52.787 sys 0m0.034s
00:05:52.788 09:23:39 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:52.788 09:23:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:52.788 ************************************
00:05:52.788 END TEST env_pci
00:05:52.788 ************************************
00:05:52.788 09:23:39 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:52.788 09:23:39 env -- env/env.sh@15 -- # uname
00:05:52.788 09:23:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:52.788 09:23:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:52.788 09:23:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:52.788 09:23:39 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:52.788 09:23:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:52.788 09:23:39 env -- common/autotest_common.sh@10 -- # set +x
00:05:52.788 ************************************
00:05:52.788 START TEST env_dpdk_post_init
00:05:52.788 ************************************
00:05:52.788 09:23:39 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:52.788 EAL: Detected CPU lcores: 128
00:05:52.788 EAL: Detected NUMA nodes: 2
00:05:52.788 EAL: Detected shared linkage of DPDK
00:05:52.788 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:52.788 EAL: Selected IOVA mode 'VA'
00:05:52.788 EAL: VFIO support initialized
00:05:52.788 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:53.049 EAL: Using IOMMU type 1 (Type 1)
00:05:53.049 EAL: Ignore mapping IO port bar(1)
00:05:53.310 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:05:53.310 EAL: Ignore mapping IO port bar(1)
00:05:53.571 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:05:53.571 EAL: Ignore mapping IO port bar(1)
00:05:53.571 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:05:53.833 EAL: Ignore mapping IO port bar(1)
00:05:53.833 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:05:54.094 EAL: Ignore mapping IO port bar(1)
00:05:54.094 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:05:54.356 EAL: Ignore mapping IO port bar(1)
00:05:54.356 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:05:54.356 EAL: Ignore mapping IO port bar(1)
00:05:54.618 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:05:54.618 EAL: Ignore mapping IO port bar(1)
00:05:54.880 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:05:55.142 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:05:55.142 EAL: Ignore mapping IO port bar(1)
00:05:55.142 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:05:55.404 EAL: Ignore mapping IO port bar(1)
00:05:55.404 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:05:55.666 EAL: Ignore mapping IO port bar(1)
00:05:55.666 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:05:55.927 EAL: Ignore mapping IO port bar(1)
00:05:55.927 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:05:55.927 EAL: Ignore mapping IO port bar(1)
00:05:56.189 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:05:56.189 EAL: Ignore mapping IO port bar(1)
00:05:56.450 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:05:56.450 EAL: Ignore mapping IO port bar(1)
00:05:56.450 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:05:56.714 EAL: Ignore mapping IO port bar(1)
00:05:56.714 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:05:56.714 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:05:56.714 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:05:56.976 Starting DPDK initialization...
00:05:56.976 Starting SPDK post initialization...
00:05:56.976 SPDK NVMe probe
00:05:56.976 Attaching to 0000:65:00.0
00:05:56.976 Attached to 0000:65:00.0
00:05:56.976 Cleaning up...
00:05:58.896
00:05:58.896 real 0m5.748s
00:05:58.896 user 0m0.111s
00:05:58.896 sys 0m0.190s
00:05:58.896 09:23:45 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:58.896 09:23:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:58.896 ************************************
00:05:58.896 END TEST env_dpdk_post_init
00:05:58.896 ************************************
00:05:58.896 09:23:45 env -- env/env.sh@26 -- # uname
00:05:58.896 09:23:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:58.896 09:23:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:58.896 09:23:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:58.896 09:23:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:58.896 09:23:45 env -- common/autotest_common.sh@10 -- # set +x
00:05:58.896 ************************************
00:05:58.896 START TEST env_mem_callbacks
00:05:58.896 ************************************
00:05:58.896 09:23:45 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:58.896 EAL: Detected CPU lcores: 128
00:05:58.896 EAL: Detected NUMA nodes: 2
00:05:58.896 EAL: Detected shared linkage of DPDK
00:05:58.896 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:58.896 EAL: Selected IOVA mode 'VA'
00:05:58.896 EAL: VFIO support initialized
00:05:58.896 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:58.896
00:05:58.896
00:05:58.896 CUnit - A unit testing framework for C - Version 2.1-3
00:05:58.896 http://cunit.sourceforge.net/
00:05:58.896
00:05:58.896
00:05:58.896 Suite: memory
00:05:58.896 Test: test ...
00:05:58.896 register 0x200000200000 2097152
00:05:58.896 malloc 3145728
00:05:58.896 register 0x200000400000 4194304
00:05:58.896 buf 0x200000500000 len 3145728 PASSED
00:05:58.896 malloc 64
00:05:58.896 buf 0x2000004fff40 len 64 PASSED
00:05:58.896 malloc 4194304
00:05:58.896 register 0x200000800000 6291456
00:05:58.896 buf 0x200000a00000 len 4194304 PASSED
00:05:58.896 free 0x200000500000 3145728
00:05:58.896 free 0x2000004fff40 64
00:05:58.896 unregister 0x200000400000 4194304 PASSED
00:05:58.896 free 0x200000a00000 4194304
00:05:58.896 unregister 0x200000800000 6291456 PASSED
00:05:58.896 malloc 8388608
00:05:58.896 register 0x200000400000 10485760
00:05:58.896 buf 0x200000600000 len 8388608 PASSED
00:05:58.896 free 0x200000600000 8388608
00:05:58.896 unregister 0x200000400000 10485760 PASSED
00:05:58.896 passed
00:05:58.896
00:05:58.896 Run Summary: Type Total Ran Passed Failed Inactive
00:05:58.896 suites 1 1 n/a 0 0
00:05:58.896 tests 1 1 1 0 0
00:05:58.896 asserts 15 15 15 0 n/a
00:05:58.896
00:05:58.896 Elapsed time = 0.010 seconds
00:05:58.896
00:05:58.896 real 0m0.067s
00:05:58.896 user 0m0.027s
00:05:58.896 sys 0m0.040s
00:05:58.896 09:23:45 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:58.896 09:23:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:58.896 ************************************
00:05:58.896 END TEST env_mem_callbacks
00:05:58.896 ************************************
00:05:58.896
00:05:58.896 real 0m7.483s
00:05:58.896 user 0m0.985s
00:05:58.896 sys 0m1.052s
00:05:58.896 09:23:45 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:58.896 09:23:45 env -- common/autotest_common.sh@10 -- # set +x
00:05:58.896 ************************************
00:05:58.896 END TEST env
00:05:58.896 ************************************
00:05:58.896 09:23:45 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:58.896 09:23:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:58.896 09:23:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:58.896 09:23:45 -- common/autotest_common.sh@10 -- # set +x
00:05:58.896 ************************************
00:05:58.896 START TEST rpc
00:05:58.896 ************************************
00:05:58.896 09:23:45 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:58.896 * Looking for test storage...
00:05:58.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:58.896 09:23:45 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:58.896 09:23:45 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:58.896 09:23:45 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:59.158 09:23:45 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:59.158 09:23:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:59.158 09:23:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:59.158 09:23:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:59.158 09:23:45 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:59.158 09:23:45 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:59.158 09:23:45 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:59.158 09:23:45 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:59.158 09:23:45 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:59.158 09:23:45 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:59.158 09:23:45 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:59.158 09:23:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:59.158 09:23:45 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:59.158 09:23:45 rpc -- scripts/common.sh@345 -- # : 1
00:05:59.158 09:23:45 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:59.158 09:23:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:59.158 09:23:45 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:59.158 09:23:45 rpc -- scripts/common.sh@353 -- # local d=1
00:05:59.158 09:23:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:59.158 09:23:45 rpc -- scripts/common.sh@355 -- # echo 1
00:05:59.158 09:23:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:59.158 09:23:45 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:59.158 09:23:45 rpc -- scripts/common.sh@353 -- # local d=2
00:05:59.158 09:23:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:59.158 09:23:45 rpc -- scripts/common.sh@355 -- # echo 2
00:05:59.158 09:23:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:59.158 09:23:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:59.158 09:23:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:59.158 09:23:45 rpc -- scripts/common.sh@368 -- # return 0
00:05:59.158 09:23:45 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:59.158 09:23:45 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:59.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:59.159 --rc genhtml_branch_coverage=1
00:05:59.159 --rc genhtml_function_coverage=1
00:05:59.159 --rc genhtml_legend=1
00:05:59.159 --rc geninfo_all_blocks=1
00:05:59.159 --rc geninfo_unexecuted_blocks=1
00:05:59.159
00:05:59.159 '
00:05:59.159 09:23:45 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:59.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:59.159 --rc genhtml_branch_coverage=1
00:05:59.159 --rc genhtml_function_coverage=1
00:05:59.159 --rc genhtml_legend=1
00:05:59.159 --rc geninfo_all_blocks=1
00:05:59.159 --rc geninfo_unexecuted_blocks=1
00:05:59.159
00:05:59.159 '
00:05:59.159 09:23:45 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:59.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:59.159 --rc genhtml_branch_coverage=1
00:05:59.159 --rc genhtml_function_coverage=1
00:05:59.159 --rc genhtml_legend=1
00:05:59.159 --rc geninfo_all_blocks=1
00:05:59.159 --rc geninfo_unexecuted_blocks=1
00:05:59.159
00:05:59.159 '
00:05:59.159 09:23:45 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:59.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:59.159 --rc genhtml_branch_coverage=1
00:05:59.159 --rc genhtml_function_coverage=1
00:05:59.159 --rc genhtml_legend=1
00:05:59.159 --rc geninfo_all_blocks=1
00:05:59.159 --rc geninfo_unexecuted_blocks=1
00:05:59.159
00:05:59.159 '
00:05:59.159 09:23:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=94375
00:05:59.159 09:23:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:59.159 09:23:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:59.159 09:23:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 94375
00:05:59.159 09:23:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 94375 ']'
00:05:59.159 09:23:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:59.159 09:23:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:59.159 09:23:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:59.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:59.159 09:23:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:59.159 09:23:45 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:59.159 [2024-11-19 09:23:45.750363] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:59.159 [2024-11-19 09:23:45.750416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94375 ]
00:05:59.159 [2024-11-19 09:23:45.834996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.159 [2024-11-19 09:23:45.887067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:59.159 [2024-11-19 09:23:45.887117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 94375' to capture a snapshot of events at runtime.
00:05:59.159 [2024-11-19 09:23:45.887126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:59.159 [2024-11-19 09:23:45.887134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:59.159 [2024-11-19 09:23:45.887140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid94375 for offline analysis/debug.
00:05:59.159 [2024-11-19 09:23:45.887879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.106 09:23:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:00.106 09:23:46 rpc -- common/autotest_common.sh@868 -- # return 0
00:06:00.106 09:23:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:00.106 09:23:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:00.106 09:23:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:00.106 09:23:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:00.106 09:23:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:00.106 09:23:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:00.106 09:23:46 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:00.106 ************************************
00:06:00.106 START TEST rpc_integrity
00:06:00.106 ************************************
00:06:00.106 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:00.106 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:00.106 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.106 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:00.106 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.106 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:00.106 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:00.106 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:00.106 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:00.106 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.106 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:00.106 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.106 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:00.106 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:00.106 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.106 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:00.107 {
00:06:00.107 "name": "Malloc0",
00:06:00.107 "aliases": [
00:06:00.107 "a1b2bffd-ba35-4cba-b9b4-43141942d3a5"
00:06:00.107 ],
00:06:00.107 "product_name": "Malloc disk",
00:06:00.107 "block_size": 512,
00:06:00.107 "num_blocks": 16384,
00:06:00.107 "uuid": "a1b2bffd-ba35-4cba-b9b4-43141942d3a5",
00:06:00.107 "assigned_rate_limits": {
00:06:00.107 "rw_ios_per_sec": 0,
00:06:00.107 "rw_mbytes_per_sec": 0,
00:06:00.107 "r_mbytes_per_sec": 0,
00:06:00.107 "w_mbytes_per_sec": 0
00:06:00.107 },
00:06:00.107 "claimed": false,
00:06:00.107 "zoned": false,
00:06:00.107 "supported_io_types": {
00:06:00.107 "read": true,
00:06:00.107 "write": true,
00:06:00.107 "unmap": true,
00:06:00.107 "flush": true,
00:06:00.107 "reset": true,
00:06:00.107 "nvme_admin": false,
00:06:00.107 "nvme_io": false,
00:06:00.107 "nvme_io_md": false,
00:06:00.107 "write_zeroes": true,
00:06:00.107 "zcopy": true,
00:06:00.107 "get_zone_info": false,
00:06:00.107 "zone_management": false,
00:06:00.107 "zone_append": false,
00:06:00.107 "compare": false,
00:06:00.107 "compare_and_write": false,
00:06:00.107 "abort": true,
00:06:00.107 "seek_hole": false,
00:06:00.107 "seek_data": false,
00:06:00.107 "copy": true,
00:06:00.107 "nvme_iov_md": false
00:06:00.107 },
00:06:00.107 "memory_domains": [
00:06:00.107 {
00:06:00.107 "dma_device_id": "system",
00:06:00.107 "dma_device_type": 1
00:06:00.107 },
00:06:00.107 {
00:06:00.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:00.107 "dma_device_type": 2
00:06:00.107 }
00:06:00.107 ],
00:06:00.107 "driver_specific": {}
00:06:00.107 }
00:06:00.107 ]'
00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:00.107 [2024-11-19 09:23:46.733836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:00.107 [2024-11-19 09:23:46.733880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:00.107 [2024-11-19 09:23:46.733896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22c7db0
00:06:00.107 [2024-11-19 09:23:46.733904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:00.107 [2024-11-19 09:23:46.735460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:00.107 [2024-11-19 09:23:46.735494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:00.107 Passthru0
00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:00.107 {
00:06:00.107 "name": "Malloc0",
00:06:00.107 "aliases": [
00:06:00.107 "a1b2bffd-ba35-4cba-b9b4-43141942d3a5"
00:06:00.107 ],
00:06:00.107 "product_name": "Malloc disk",
00:06:00.107 "block_size": 512,
00:06:00.107 "num_blocks": 16384,
00:06:00.107 "uuid": "a1b2bffd-ba35-4cba-b9b4-43141942d3a5",
00:06:00.107 "assigned_rate_limits": {
00:06:00.107 "rw_ios_per_sec": 0,
00:06:00.107 "rw_mbytes_per_sec": 0,
00:06:00.107 "r_mbytes_per_sec": 0,
00:06:00.107 "w_mbytes_per_sec": 0
00:06:00.107 },
00:06:00.107 "claimed": true,
00:06:00.107 "claim_type": "exclusive_write",
00:06:00.107 "zoned": false,
00:06:00.107 "supported_io_types": {
00:06:00.107 "read": true,
00:06:00.107 "write": true,
00:06:00.107 "unmap": true,
00:06:00.107 "flush": true,
00:06:00.107 "reset": true,
00:06:00.107 "nvme_admin": false,
00:06:00.107 "nvme_io": false,
00:06:00.107 "nvme_io_md": false,
00:06:00.107 "write_zeroes": true,
00:06:00.107 "zcopy": true,
00:06:00.107 "get_zone_info": false,
00:06:00.107 "zone_management": false,
00:06:00.107 "zone_append": false,
00:06:00.107 "compare": false,
00:06:00.107 "compare_and_write": false,
00:06:00.107 "abort": true,
00:06:00.107 "seek_hole": false,
00:06:00.107 "seek_data": false,
00:06:00.107 "copy": true,
00:06:00.107 "nvme_iov_md": false
00:06:00.107 },
00:06:00.107 "memory_domains": [
00:06:00.107 {
00:06:00.107 "dma_device_id": "system",
00:06:00.107 "dma_device_type": 1
00:06:00.107 },
00:06:00.107 {
00:06:00.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:00.107 "dma_device_type": 2
00:06:00.107 }
00:06:00.107 ],
00:06:00.107 "driver_specific": {}
00:06:00.107 },
00:06:00.107 {
00:06:00.107 "name": "Passthru0", 00:06:00.107 "aliases": [ 00:06:00.107 "ec23e418-1997-54d8-a1e8-7fd5f28fae48" 00:06:00.107 ], 00:06:00.107 "product_name": "passthru", 00:06:00.107 "block_size": 512, 00:06:00.107 "num_blocks": 16384, 00:06:00.107 "uuid": "ec23e418-1997-54d8-a1e8-7fd5f28fae48", 00:06:00.107 "assigned_rate_limits": { 00:06:00.107 "rw_ios_per_sec": 0, 00:06:00.107 "rw_mbytes_per_sec": 0, 00:06:00.107 "r_mbytes_per_sec": 0, 00:06:00.107 "w_mbytes_per_sec": 0 00:06:00.107 }, 00:06:00.107 "claimed": false, 00:06:00.107 "zoned": false, 00:06:00.107 "supported_io_types": { 00:06:00.107 "read": true, 00:06:00.107 "write": true, 00:06:00.107 "unmap": true, 00:06:00.107 "flush": true, 00:06:00.107 "reset": true, 00:06:00.107 "nvme_admin": false, 00:06:00.107 "nvme_io": false, 00:06:00.107 "nvme_io_md": false, 00:06:00.107 "write_zeroes": true, 00:06:00.107 "zcopy": true, 00:06:00.107 "get_zone_info": false, 00:06:00.107 "zone_management": false, 00:06:00.107 "zone_append": false, 00:06:00.107 "compare": false, 00:06:00.107 "compare_and_write": false, 00:06:00.107 "abort": true, 00:06:00.107 "seek_hole": false, 00:06:00.107 "seek_data": false, 00:06:00.107 "copy": true, 00:06:00.107 "nvme_iov_md": false 00:06:00.107 }, 00:06:00.107 "memory_domains": [ 00:06:00.107 { 00:06:00.107 "dma_device_id": "system", 00:06:00.107 "dma_device_type": 1 00:06:00.107 }, 00:06:00.107 { 00:06:00.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.107 "dma_device_type": 2 00:06:00.107 } 00:06:00.107 ], 00:06:00.107 "driver_specific": { 00:06:00.107 "passthru": { 00:06:00.107 "name": "Passthru0", 00:06:00.107 "base_bdev_name": "Malloc0" 00:06:00.107 } 00:06:00.107 } 00:06:00.107 } 00:06:00.107 ]' 00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:00.107 09:23:46 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.107 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.107 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.369 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.369 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:00.369 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:00.369 09:23:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:00.369 00:06:00.369 real 0m0.309s 00:06:00.369 user 0m0.202s 00:06:00.369 sys 0m0.039s 00:06:00.369 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.369 09:23:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.369 ************************************ 00:06:00.369 END TEST rpc_integrity 00:06:00.369 ************************************ 00:06:00.369 09:23:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:00.369 09:23:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.369 09:23:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.369 09:23:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.369 ************************************ 00:06:00.369 START TEST rpc_plugins 
00:06:00.369 ************************************ 00:06:00.369 09:23:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:00.369 09:23:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:00.369 09:23:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.369 09:23:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.369 09:23:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.369 09:23:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:00.369 09:23:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:00.369 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.369 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.369 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.369 09:23:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:00.369 { 00:06:00.369 "name": "Malloc1", 00:06:00.369 "aliases": [ 00:06:00.369 "0efaa2cd-9351-4ab1-bdba-10cc52918a7f" 00:06:00.369 ], 00:06:00.370 "product_name": "Malloc disk", 00:06:00.370 "block_size": 4096, 00:06:00.370 "num_blocks": 256, 00:06:00.370 "uuid": "0efaa2cd-9351-4ab1-bdba-10cc52918a7f", 00:06:00.370 "assigned_rate_limits": { 00:06:00.370 "rw_ios_per_sec": 0, 00:06:00.370 "rw_mbytes_per_sec": 0, 00:06:00.370 "r_mbytes_per_sec": 0, 00:06:00.370 "w_mbytes_per_sec": 0 00:06:00.370 }, 00:06:00.370 "claimed": false, 00:06:00.370 "zoned": false, 00:06:00.370 "supported_io_types": { 00:06:00.370 "read": true, 00:06:00.370 "write": true, 00:06:00.370 "unmap": true, 00:06:00.370 "flush": true, 00:06:00.370 "reset": true, 00:06:00.370 "nvme_admin": false, 00:06:00.370 "nvme_io": false, 00:06:00.370 "nvme_io_md": false, 00:06:00.370 "write_zeroes": true, 00:06:00.370 "zcopy": true, 00:06:00.370 "get_zone_info": false, 00:06:00.370 "zone_management": false, 00:06:00.370 
"zone_append": false, 00:06:00.370 "compare": false, 00:06:00.370 "compare_and_write": false, 00:06:00.370 "abort": true, 00:06:00.370 "seek_hole": false, 00:06:00.370 "seek_data": false, 00:06:00.370 "copy": true, 00:06:00.370 "nvme_iov_md": false 00:06:00.370 }, 00:06:00.370 "memory_domains": [ 00:06:00.370 { 00:06:00.370 "dma_device_id": "system", 00:06:00.370 "dma_device_type": 1 00:06:00.370 }, 00:06:00.370 { 00:06:00.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.370 "dma_device_type": 2 00:06:00.370 } 00:06:00.370 ], 00:06:00.370 "driver_specific": {} 00:06:00.370 } 00:06:00.370 ]' 00:06:00.370 09:23:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:00.370 09:23:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:00.370 09:23:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:00.370 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.370 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.370 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.370 09:23:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:00.370 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.370 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.370 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.370 09:23:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:00.370 09:23:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:00.631 09:23:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:00.631 00:06:00.631 real 0m0.156s 00:06:00.631 user 0m0.094s 00:06:00.631 sys 0m0.023s 00:06:00.631 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.631 09:23:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.631 ************************************ 
00:06:00.631 END TEST rpc_plugins 00:06:00.631 ************************************ 00:06:00.631 09:23:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:00.631 09:23:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.631 09:23:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.631 09:23:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.631 ************************************ 00:06:00.631 START TEST rpc_trace_cmd_test 00:06:00.631 ************************************ 00:06:00.631 09:23:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:00.631 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:00.631 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:00.631 09:23:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.631 09:23:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.631 09:23:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.631 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:00.631 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid94375", 00:06:00.631 "tpoint_group_mask": "0x8", 00:06:00.631 "iscsi_conn": { 00:06:00.631 "mask": "0x2", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "scsi": { 00:06:00.631 "mask": "0x4", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "bdev": { 00:06:00.631 "mask": "0x8", 00:06:00.631 "tpoint_mask": "0xffffffffffffffff" 00:06:00.631 }, 00:06:00.631 "nvmf_rdma": { 00:06:00.631 "mask": "0x10", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "nvmf_tcp": { 00:06:00.631 "mask": "0x20", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "ftl": { 00:06:00.631 "mask": "0x40", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "blobfs": { 00:06:00.631 "mask": "0x80", 00:06:00.631 
"tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "dsa": { 00:06:00.631 "mask": "0x200", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "thread": { 00:06:00.631 "mask": "0x400", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "nvme_pcie": { 00:06:00.631 "mask": "0x800", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "iaa": { 00:06:00.631 "mask": "0x1000", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "nvme_tcp": { 00:06:00.631 "mask": "0x2000", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "bdev_nvme": { 00:06:00.631 "mask": "0x4000", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "sock": { 00:06:00.631 "mask": "0x8000", 00:06:00.631 "tpoint_mask": "0x0" 00:06:00.631 }, 00:06:00.631 "blob": { 00:06:00.631 "mask": "0x10000", 00:06:00.632 "tpoint_mask": "0x0" 00:06:00.632 }, 00:06:00.632 "bdev_raid": { 00:06:00.632 "mask": "0x20000", 00:06:00.632 "tpoint_mask": "0x0" 00:06:00.632 }, 00:06:00.632 "scheduler": { 00:06:00.632 "mask": "0x40000", 00:06:00.632 "tpoint_mask": "0x0" 00:06:00.632 } 00:06:00.632 }' 00:06:00.632 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:00.632 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:00.632 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:00.632 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:00.632 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:00.893 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:00.893 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:00.893 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:00.893 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:00.893 09:23:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:00.893 00:06:00.893 real 0m0.252s 00:06:00.893 user 0m0.214s 00:06:00.893 sys 0m0.029s 00:06:00.893 09:23:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.893 09:23:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.893 ************************************ 00:06:00.893 END TEST rpc_trace_cmd_test 00:06:00.893 ************************************ 00:06:00.893 09:23:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:00.893 09:23:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:00.893 09:23:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:00.893 09:23:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.893 09:23:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.893 09:23:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.893 ************************************ 00:06:00.893 START TEST rpc_daemon_integrity 00:06:00.893 ************************************ 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.893 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:01.155 { 00:06:01.155 "name": "Malloc2", 00:06:01.155 "aliases": [ 00:06:01.155 "308bcf71-fa10-46ae-ae2a-4d428fc3a6be" 00:06:01.155 ], 00:06:01.155 "product_name": "Malloc disk", 00:06:01.155 "block_size": 512, 00:06:01.155 "num_blocks": 16384, 00:06:01.155 "uuid": "308bcf71-fa10-46ae-ae2a-4d428fc3a6be", 00:06:01.155 "assigned_rate_limits": { 00:06:01.155 "rw_ios_per_sec": 0, 00:06:01.155 "rw_mbytes_per_sec": 0, 00:06:01.155 "r_mbytes_per_sec": 0, 00:06:01.155 "w_mbytes_per_sec": 0 00:06:01.155 }, 00:06:01.155 "claimed": false, 00:06:01.155 "zoned": false, 00:06:01.155 "supported_io_types": { 00:06:01.155 "read": true, 00:06:01.155 "write": true, 00:06:01.155 "unmap": true, 00:06:01.155 "flush": true, 00:06:01.155 "reset": true, 00:06:01.155 "nvme_admin": false, 00:06:01.155 "nvme_io": false, 00:06:01.155 "nvme_io_md": false, 00:06:01.155 "write_zeroes": true, 00:06:01.155 "zcopy": true, 00:06:01.155 "get_zone_info": false, 00:06:01.155 "zone_management": false, 00:06:01.155 "zone_append": false, 00:06:01.155 "compare": false, 00:06:01.155 "compare_and_write": false, 00:06:01.155 "abort": true, 00:06:01.155 "seek_hole": false, 00:06:01.155 "seek_data": false, 00:06:01.155 "copy": true, 00:06:01.155 "nvme_iov_md": false 00:06:01.155 }, 00:06:01.155 "memory_domains": [ 00:06:01.155 { 
00:06:01.155 "dma_device_id": "system", 00:06:01.155 "dma_device_type": 1 00:06:01.155 }, 00:06:01.155 { 00:06:01.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.155 "dma_device_type": 2 00:06:01.155 } 00:06:01.155 ], 00:06:01.155 "driver_specific": {} 00:06:01.155 } 00:06:01.155 ]' 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.155 [2024-11-19 09:23:47.696455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:01.155 [2024-11-19 09:23:47.696495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.155 [2024-11-19 09:23:47.696509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23f88d0 00:06:01.155 [2024-11-19 09:23:47.696516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.155 [2024-11-19 09:23:47.697964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.155 [2024-11-19 09:23:47.697999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:01.155 Passthru0 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:01.155 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:01.155 { 00:06:01.155 "name": "Malloc2", 00:06:01.155 "aliases": [ 00:06:01.155 "308bcf71-fa10-46ae-ae2a-4d428fc3a6be" 00:06:01.155 ], 00:06:01.155 "product_name": "Malloc disk", 00:06:01.155 "block_size": 512, 00:06:01.155 "num_blocks": 16384, 00:06:01.155 "uuid": "308bcf71-fa10-46ae-ae2a-4d428fc3a6be", 00:06:01.155 "assigned_rate_limits": { 00:06:01.155 "rw_ios_per_sec": 0, 00:06:01.155 "rw_mbytes_per_sec": 0, 00:06:01.155 "r_mbytes_per_sec": 0, 00:06:01.155 "w_mbytes_per_sec": 0 00:06:01.155 }, 00:06:01.155 "claimed": true, 00:06:01.155 "claim_type": "exclusive_write", 00:06:01.155 "zoned": false, 00:06:01.155 "supported_io_types": { 00:06:01.155 "read": true, 00:06:01.155 "write": true, 00:06:01.155 "unmap": true, 00:06:01.155 "flush": true, 00:06:01.155 "reset": true, 00:06:01.155 "nvme_admin": false, 00:06:01.155 "nvme_io": false, 00:06:01.155 "nvme_io_md": false, 00:06:01.155 "write_zeroes": true, 00:06:01.155 "zcopy": true, 00:06:01.155 "get_zone_info": false, 00:06:01.155 "zone_management": false, 00:06:01.155 "zone_append": false, 00:06:01.155 "compare": false, 00:06:01.155 "compare_and_write": false, 00:06:01.155 "abort": true, 00:06:01.155 "seek_hole": false, 00:06:01.155 "seek_data": false, 00:06:01.155 "copy": true, 00:06:01.155 "nvme_iov_md": false 00:06:01.155 }, 00:06:01.155 "memory_domains": [ 00:06:01.155 { 00:06:01.155 "dma_device_id": "system", 00:06:01.155 "dma_device_type": 1 00:06:01.155 }, 00:06:01.155 { 00:06:01.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.155 "dma_device_type": 2 00:06:01.155 } 00:06:01.155 ], 00:06:01.155 "driver_specific": {} 00:06:01.155 }, 00:06:01.155 { 00:06:01.155 "name": "Passthru0", 00:06:01.155 "aliases": [ 00:06:01.155 "c95b3110-a548-592b-bbbc-247d6ed794b6" 00:06:01.155 ], 00:06:01.155 "product_name": "passthru", 00:06:01.155 "block_size": 512, 00:06:01.155 "num_blocks": 16384, 00:06:01.155 "uuid": 
"c95b3110-a548-592b-bbbc-247d6ed794b6", 00:06:01.155 "assigned_rate_limits": { 00:06:01.155 "rw_ios_per_sec": 0, 00:06:01.155 "rw_mbytes_per_sec": 0, 00:06:01.155 "r_mbytes_per_sec": 0, 00:06:01.155 "w_mbytes_per_sec": 0 00:06:01.155 }, 00:06:01.155 "claimed": false, 00:06:01.155 "zoned": false, 00:06:01.155 "supported_io_types": { 00:06:01.155 "read": true, 00:06:01.155 "write": true, 00:06:01.155 "unmap": true, 00:06:01.155 "flush": true, 00:06:01.155 "reset": true, 00:06:01.155 "nvme_admin": false, 00:06:01.155 "nvme_io": false, 00:06:01.155 "nvme_io_md": false, 00:06:01.155 "write_zeroes": true, 00:06:01.155 "zcopy": true, 00:06:01.155 "get_zone_info": false, 00:06:01.155 "zone_management": false, 00:06:01.155 "zone_append": false, 00:06:01.155 "compare": false, 00:06:01.155 "compare_and_write": false, 00:06:01.155 "abort": true, 00:06:01.155 "seek_hole": false, 00:06:01.155 "seek_data": false, 00:06:01.155 "copy": true, 00:06:01.156 "nvme_iov_md": false 00:06:01.156 }, 00:06:01.156 "memory_domains": [ 00:06:01.156 { 00:06:01.156 "dma_device_id": "system", 00:06:01.156 "dma_device_type": 1 00:06:01.156 }, 00:06:01.156 { 00:06:01.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.156 "dma_device_type": 2 00:06:01.156 } 00:06:01.156 ], 00:06:01.156 "driver_specific": { 00:06:01.156 "passthru": { 00:06:01.156 "name": "Passthru0", 00:06:01.156 "base_bdev_name": "Malloc2" 00:06:01.156 } 00:06:01.156 } 00:06:01.156 } 00:06:01.156 ]' 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:01.156 00:06:01.156 real 0m0.301s 00:06:01.156 user 0m0.175s 00:06:01.156 sys 0m0.052s 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.156 09:23:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.156 ************************************ 00:06:01.156 END TEST rpc_daemon_integrity 00:06:01.156 ************************************ 00:06:01.156 09:23:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:01.156 09:23:47 rpc -- rpc/rpc.sh@84 -- # killprocess 94375 00:06:01.156 09:23:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 94375 ']' 00:06:01.156 09:23:47 rpc -- common/autotest_common.sh@958 -- # kill -0 94375 00:06:01.418 09:23:47 rpc -- common/autotest_common.sh@959 -- # uname 00:06:01.418 09:23:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.418 09:23:47 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94375 00:06:01.418 09:23:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.418 09:23:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.418 09:23:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94375' 00:06:01.418 killing process with pid 94375 00:06:01.418 09:23:47 rpc -- common/autotest_common.sh@973 -- # kill 94375 00:06:01.418 09:23:47 rpc -- common/autotest_common.sh@978 -- # wait 94375 00:06:01.679 00:06:01.679 real 0m2.717s 00:06:01.679 user 0m3.430s 00:06:01.679 sys 0m0.859s 00:06:01.679 09:23:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.679 09:23:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.679 ************************************ 00:06:01.679 END TEST rpc 00:06:01.679 ************************************ 00:06:01.679 09:23:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:01.679 09:23:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.679 09:23:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.679 09:23:48 -- common/autotest_common.sh@10 -- # set +x 00:06:01.679 ************************************ 00:06:01.679 START TEST skip_rpc 00:06:01.679 ************************************ 00:06:01.679 09:23:48 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:01.679 * Looking for test storage... 
00:06:01.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:01.679 09:23:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.680 09:23:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.680 09:23:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.942 09:23:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.942 09:23:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:01.942 09:23:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.942 09:23:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.942 --rc genhtml_branch_coverage=1 00:06:01.942 --rc genhtml_function_coverage=1 00:06:01.942 --rc genhtml_legend=1 00:06:01.942 --rc geninfo_all_blocks=1 00:06:01.942 --rc geninfo_unexecuted_blocks=1 00:06:01.942 00:06:01.942 ' 00:06:01.942 09:23:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.942 --rc genhtml_branch_coverage=1 00:06:01.942 --rc genhtml_function_coverage=1 00:06:01.942 --rc genhtml_legend=1 00:06:01.942 --rc geninfo_all_blocks=1 00:06:01.942 --rc geninfo_unexecuted_blocks=1 00:06:01.942 00:06:01.942 ' 00:06:01.942 09:23:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.942 --rc genhtml_branch_coverage=1 00:06:01.942 --rc genhtml_function_coverage=1 00:06:01.942 --rc genhtml_legend=1 00:06:01.942 --rc geninfo_all_blocks=1 00:06:01.942 --rc geninfo_unexecuted_blocks=1 00:06:01.942 00:06:01.942 ' 00:06:01.942 09:23:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.942 --rc genhtml_branch_coverage=1 00:06:01.942 --rc genhtml_function_coverage=1 00:06:01.942 --rc genhtml_legend=1 00:06:01.942 --rc geninfo_all_blocks=1 00:06:01.942 --rc geninfo_unexecuted_blocks=1 00:06:01.942 00:06:01.942 ' 00:06:01.942 09:23:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:01.942 09:23:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:01.942 09:23:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:01.942 09:23:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.942 09:23:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.942 09:23:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.942 ************************************ 00:06:01.942 START TEST skip_rpc 00:06:01.942 ************************************ 00:06:01.942 09:23:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:01.942 09:23:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=95219 00:06:01.942 09:23:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.942 09:23:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:01.942 09:23:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
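The `rpc_cmd` invocations recorded throughout this log (`bdev_malloc_create`, `bdev_passthru_create`, `bdev_get_bdevs`, `bdev_passthru_delete`, ...) are JSON-RPC 2.0 requests sent to the `spdk_tgt` process over its Unix domain socket, `/var/tmp/spdk.sock` by default (as the `waitforlisten` message above shows). A minimal sketch of the request shape follows; `build_spdk_request` is an illustrative helper, not part of SPDK, and no live target is contacted.

```python
# Illustrative sketch only: builds the JSON-RPC 2.0 payload that SPDK's
# rpc.py would write to /var/tmp/spdk.sock; it does not talk to a target.
import json

def build_spdk_request(method, params=None, request_id=1):
    """Construct a JSON-RPC 2.0 request body for an SPDK RPC method."""
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# Equivalent of the "rpc_cmd bdev_malloc_create 8 512" step logged above:
# 8 MiB at a 512-byte block size is the 16384-block Malloc bdev seen in the
# bdev_get_bdevs JSON dumps.
payload = build_spdk_request("bdev_malloc_create",
                             {"num_blocks": 16384, "block_size": 512})
```

On success the target replies with a matching `"id"` and a `"result"` field (here, the created bdev's name, e.g. `Malloc0`); the `bdev_get_bdevs` responses captured in this log are the `"result"` arrays of such replies.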
00:06:01.942 [2024-11-19 09:23:48.590008] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:01.942 [2024-11-19 09:23:48.590067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95219 ] 00:06:01.942 [2024-11-19 09:23:48.681715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.204 [2024-11-19 09:23:48.736297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.500 09:23:53 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 95219 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 95219 ']' 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 95219 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95219 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95219' 00:06:07.500 killing process with pid 95219 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 95219 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 95219 00:06:07.500 00:06:07.500 real 0m5.263s 00:06:07.500 user 0m5.008s 00:06:07.500 sys 0m0.305s 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.500 09:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 ************************************ 00:06:07.500 END TEST skip_rpc 00:06:07.500 ************************************ 00:06:07.500 09:23:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:07.500 09:23:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.500 09:23:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.500 09:23:53 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.500 ************************************ 00:06:07.500 START TEST skip_rpc_with_json 00:06:07.500 ************************************ 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=96264 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 96264 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 96264 ']' 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.500 09:23:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.500 [2024-11-19 09:23:53.934146] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:07.500 [2024-11-19 09:23:53.934206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96264 ] 00:06:07.500 [2024-11-19 09:23:54.018844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.500 [2024-11-19 09:23:54.050003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.073 [2024-11-19 09:23:54.707044] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:08.073 request: 00:06:08.073 { 00:06:08.073 "trtype": "tcp", 00:06:08.073 "method": "nvmf_get_transports", 00:06:08.073 "req_id": 1 00:06:08.073 } 00:06:08.073 Got JSON-RPC error response 00:06:08.073 response: 00:06:08.073 { 00:06:08.073 "code": -19, 00:06:08.073 "message": "No such device" 00:06:08.073 } 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.073 [2024-11-19 09:23:54.719139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.073 09:23:54 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.073 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.335 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.335 09:23:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:08.335 { 00:06:08.335 "subsystems": [ 00:06:08.335 { 00:06:08.335 "subsystem": "fsdev", 00:06:08.335 "config": [ 00:06:08.335 { 00:06:08.335 "method": "fsdev_set_opts", 00:06:08.335 "params": { 00:06:08.335 "fsdev_io_pool_size": 65535, 00:06:08.335 "fsdev_io_cache_size": 256 00:06:08.335 } 00:06:08.335 } 00:06:08.335 ] 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "subsystem": "vfio_user_target", 00:06:08.335 "config": null 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "subsystem": "keyring", 00:06:08.335 "config": [] 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "subsystem": "iobuf", 00:06:08.335 "config": [ 00:06:08.335 { 00:06:08.335 "method": "iobuf_set_options", 00:06:08.335 "params": { 00:06:08.335 "small_pool_count": 8192, 00:06:08.335 "large_pool_count": 1024, 00:06:08.335 "small_bufsize": 8192, 00:06:08.335 "large_bufsize": 135168, 00:06:08.335 "enable_numa": false 00:06:08.335 } 00:06:08.335 } 00:06:08.335 ] 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "subsystem": "sock", 00:06:08.335 "config": [ 00:06:08.335 { 00:06:08.335 "method": "sock_set_default_impl", 00:06:08.335 "params": { 00:06:08.335 "impl_name": "posix" 00:06:08.335 } 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "method": "sock_impl_set_options", 00:06:08.335 "params": { 00:06:08.335 "impl_name": "ssl", 00:06:08.335 "recv_buf_size": 4096, 00:06:08.335 "send_buf_size": 4096, 
00:06:08.335 "enable_recv_pipe": true, 00:06:08.335 "enable_quickack": false, 00:06:08.335 "enable_placement_id": 0, 00:06:08.335 "enable_zerocopy_send_server": true, 00:06:08.335 "enable_zerocopy_send_client": false, 00:06:08.335 "zerocopy_threshold": 0, 00:06:08.335 "tls_version": 0, 00:06:08.335 "enable_ktls": false 00:06:08.335 } 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "method": "sock_impl_set_options", 00:06:08.335 "params": { 00:06:08.335 "impl_name": "posix", 00:06:08.335 "recv_buf_size": 2097152, 00:06:08.335 "send_buf_size": 2097152, 00:06:08.335 "enable_recv_pipe": true, 00:06:08.335 "enable_quickack": false, 00:06:08.335 "enable_placement_id": 0, 00:06:08.335 "enable_zerocopy_send_server": true, 00:06:08.335 "enable_zerocopy_send_client": false, 00:06:08.335 "zerocopy_threshold": 0, 00:06:08.335 "tls_version": 0, 00:06:08.335 "enable_ktls": false 00:06:08.335 } 00:06:08.335 } 00:06:08.335 ] 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "subsystem": "vmd", 00:06:08.335 "config": [] 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "subsystem": "accel", 00:06:08.335 "config": [ 00:06:08.335 { 00:06:08.335 "method": "accel_set_options", 00:06:08.335 "params": { 00:06:08.335 "small_cache_size": 128, 00:06:08.335 "large_cache_size": 16, 00:06:08.335 "task_count": 2048, 00:06:08.335 "sequence_count": 2048, 00:06:08.335 "buf_count": 2048 00:06:08.335 } 00:06:08.335 } 00:06:08.335 ] 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "subsystem": "bdev", 00:06:08.335 "config": [ 00:06:08.335 { 00:06:08.335 "method": "bdev_set_options", 00:06:08.335 "params": { 00:06:08.335 "bdev_io_pool_size": 65535, 00:06:08.335 "bdev_io_cache_size": 256, 00:06:08.335 "bdev_auto_examine": true, 00:06:08.335 "iobuf_small_cache_size": 128, 00:06:08.335 "iobuf_large_cache_size": 16 00:06:08.335 } 00:06:08.335 }, 00:06:08.335 { 00:06:08.335 "method": "bdev_raid_set_options", 00:06:08.335 "params": { 00:06:08.335 "process_window_size_kb": 1024, 00:06:08.336 "process_max_bandwidth_mb_sec": 0 
00:06:08.336 } 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "method": "bdev_iscsi_set_options", 00:06:08.336 "params": { 00:06:08.336 "timeout_sec": 30 00:06:08.336 } 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "method": "bdev_nvme_set_options", 00:06:08.336 "params": { 00:06:08.336 "action_on_timeout": "none", 00:06:08.336 "timeout_us": 0, 00:06:08.336 "timeout_admin_us": 0, 00:06:08.336 "keep_alive_timeout_ms": 10000, 00:06:08.336 "arbitration_burst": 0, 00:06:08.336 "low_priority_weight": 0, 00:06:08.336 "medium_priority_weight": 0, 00:06:08.336 "high_priority_weight": 0, 00:06:08.336 "nvme_adminq_poll_period_us": 10000, 00:06:08.336 "nvme_ioq_poll_period_us": 0, 00:06:08.336 "io_queue_requests": 0, 00:06:08.336 "delay_cmd_submit": true, 00:06:08.336 "transport_retry_count": 4, 00:06:08.336 "bdev_retry_count": 3, 00:06:08.336 "transport_ack_timeout": 0, 00:06:08.336 "ctrlr_loss_timeout_sec": 0, 00:06:08.336 "reconnect_delay_sec": 0, 00:06:08.336 "fast_io_fail_timeout_sec": 0, 00:06:08.336 "disable_auto_failback": false, 00:06:08.336 "generate_uuids": false, 00:06:08.336 "transport_tos": 0, 00:06:08.336 "nvme_error_stat": false, 00:06:08.336 "rdma_srq_size": 0, 00:06:08.336 "io_path_stat": false, 00:06:08.336 "allow_accel_sequence": false, 00:06:08.336 "rdma_max_cq_size": 0, 00:06:08.336 "rdma_cm_event_timeout_ms": 0, 00:06:08.336 "dhchap_digests": [ 00:06:08.336 "sha256", 00:06:08.336 "sha384", 00:06:08.336 "sha512" 00:06:08.336 ], 00:06:08.336 "dhchap_dhgroups": [ 00:06:08.336 "null", 00:06:08.336 "ffdhe2048", 00:06:08.336 "ffdhe3072", 00:06:08.336 "ffdhe4096", 00:06:08.336 "ffdhe6144", 00:06:08.336 "ffdhe8192" 00:06:08.336 ] 00:06:08.336 } 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "method": "bdev_nvme_set_hotplug", 00:06:08.336 "params": { 00:06:08.336 "period_us": 100000, 00:06:08.336 "enable": false 00:06:08.336 } 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "method": "bdev_wait_for_examine" 00:06:08.336 } 00:06:08.336 ] 00:06:08.336 }, 00:06:08.336 { 
00:06:08.336 "subsystem": "scsi", 00:06:08.336 "config": null 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "subsystem": "scheduler", 00:06:08.336 "config": [ 00:06:08.336 { 00:06:08.336 "method": "framework_set_scheduler", 00:06:08.336 "params": { 00:06:08.336 "name": "static" 00:06:08.336 } 00:06:08.336 } 00:06:08.336 ] 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "subsystem": "vhost_scsi", 00:06:08.336 "config": [] 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "subsystem": "vhost_blk", 00:06:08.336 "config": [] 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "subsystem": "ublk", 00:06:08.336 "config": [] 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "subsystem": "nbd", 00:06:08.336 "config": [] 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "subsystem": "nvmf", 00:06:08.336 "config": [ 00:06:08.336 { 00:06:08.336 "method": "nvmf_set_config", 00:06:08.336 "params": { 00:06:08.336 "discovery_filter": "match_any", 00:06:08.336 "admin_cmd_passthru": { 00:06:08.336 "identify_ctrlr": false 00:06:08.336 }, 00:06:08.336 "dhchap_digests": [ 00:06:08.336 "sha256", 00:06:08.336 "sha384", 00:06:08.336 "sha512" 00:06:08.336 ], 00:06:08.336 "dhchap_dhgroups": [ 00:06:08.336 "null", 00:06:08.336 "ffdhe2048", 00:06:08.336 "ffdhe3072", 00:06:08.336 "ffdhe4096", 00:06:08.336 "ffdhe6144", 00:06:08.336 "ffdhe8192" 00:06:08.336 ] 00:06:08.336 } 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "method": "nvmf_set_max_subsystems", 00:06:08.336 "params": { 00:06:08.336 "max_subsystems": 1024 00:06:08.336 } 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "method": "nvmf_set_crdt", 00:06:08.336 "params": { 00:06:08.336 "crdt1": 0, 00:06:08.336 "crdt2": 0, 00:06:08.336 "crdt3": 0 00:06:08.336 } 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "method": "nvmf_create_transport", 00:06:08.336 "params": { 00:06:08.336 "trtype": "TCP", 00:06:08.336 "max_queue_depth": 128, 00:06:08.336 "max_io_qpairs_per_ctrlr": 127, 00:06:08.336 "in_capsule_data_size": 4096, 00:06:08.336 "max_io_size": 131072, 00:06:08.336 
"io_unit_size": 131072, 00:06:08.336 "max_aq_depth": 128, 00:06:08.336 "num_shared_buffers": 511, 00:06:08.336 "buf_cache_size": 4294967295, 00:06:08.336 "dif_insert_or_strip": false, 00:06:08.336 "zcopy": false, 00:06:08.336 "c2h_success": true, 00:06:08.336 "sock_priority": 0, 00:06:08.336 "abort_timeout_sec": 1, 00:06:08.336 "ack_timeout": 0, 00:06:08.336 "data_wr_pool_size": 0 00:06:08.336 } 00:06:08.336 } 00:06:08.336 ] 00:06:08.336 }, 00:06:08.336 { 00:06:08.336 "subsystem": "iscsi", 00:06:08.336 "config": [ 00:06:08.336 { 00:06:08.336 "method": "iscsi_set_options", 00:06:08.336 "params": { 00:06:08.336 "node_base": "iqn.2016-06.io.spdk", 00:06:08.336 "max_sessions": 128, 00:06:08.336 "max_connections_per_session": 2, 00:06:08.336 "max_queue_depth": 64, 00:06:08.336 "default_time2wait": 2, 00:06:08.336 "default_time2retain": 20, 00:06:08.336 "first_burst_length": 8192, 00:06:08.336 "immediate_data": true, 00:06:08.336 "allow_duplicated_isid": false, 00:06:08.336 "error_recovery_level": 0, 00:06:08.336 "nop_timeout": 60, 00:06:08.336 "nop_in_interval": 30, 00:06:08.336 "disable_chap": false, 00:06:08.336 "require_chap": false, 00:06:08.336 "mutual_chap": false, 00:06:08.336 "chap_group": 0, 00:06:08.336 "max_large_datain_per_connection": 64, 00:06:08.336 "max_r2t_per_connection": 4, 00:06:08.336 "pdu_pool_size": 36864, 00:06:08.336 "immediate_data_pool_size": 16384, 00:06:08.336 "data_out_pool_size": 2048 00:06:08.336 } 00:06:08.336 } 00:06:08.336 ] 00:06:08.336 } 00:06:08.336 ] 00:06:08.336 } 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 96264 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 96264 ']' 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 96264 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96264 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96264' 00:06:08.336 killing process with pid 96264 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 96264 00:06:08.336 09:23:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 96264 00:06:08.597 09:23:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=96604 00:06:08.597 09:23:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:08.597 09:23:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 96604 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 96604 ']' 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 96604 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96604 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96604' 00:06:13.891 killing process with pid 96604 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 96604 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 96604 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:13.891 00:06:13.891 real 0m6.539s 00:06:13.891 user 0m6.453s 00:06:13.891 sys 0m0.541s 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.891 ************************************ 00:06:13.891 END TEST skip_rpc_with_json 00:06:13.891 ************************************ 00:06:13.891 09:24:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:13.891 09:24:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.891 09:24:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.891 09:24:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.891 ************************************ 00:06:13.891 START TEST skip_rpc_with_delay 00:06:13.891 ************************************ 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:13.891 [2024-11-19 09:24:00.548359] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.891 00:06:13.891 real 0m0.082s 00:06:13.891 user 0m0.052s 00:06:13.891 sys 0m0.030s 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.891 09:24:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:13.891 ************************************ 00:06:13.892 END TEST skip_rpc_with_delay 00:06:13.892 ************************************ 00:06:13.892 09:24:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:13.892 09:24:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:13.892 09:24:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:13.892 09:24:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.892 09:24:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.892 09:24:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.153 ************************************ 00:06:14.153 START TEST exit_on_failed_rpc_init 00:06:14.153 ************************************ 00:06:14.153 09:24:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:14.153 09:24:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=97703 00:06:14.153 09:24:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 97703 00:06:14.153 09:24:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.153 
09:24:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 97703 ']' 00:06:14.153 09:24:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.153 09:24:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.153 09:24:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.153 09:24:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.153 09:24:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.153 [2024-11-19 09:24:00.705196] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:14.153 [2024-11-19 09:24:00.705256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97703 ] 00:06:14.153 [2024-11-19 09:24:00.793840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.153 [2024-11-19 09:24:00.828547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.096 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.096 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:15.096 09:24:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.096 09:24:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.096 09:24:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:15.096 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.097 [2024-11-19 09:24:01.563727] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:15.097 [2024-11-19 09:24:01.563780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97923 ] 00:06:15.097 [2024-11-19 09:24:01.649241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.097 [2024-11-19 09:24:01.685387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.097 [2024-11-19 09:24:01.685437] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:15.097 [2024-11-19 09:24:01.685447] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:15.097 [2024-11-19 09:24:01.685454] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 97703 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 97703 ']' 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 97703 00:06:15.097 09:24:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97703 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97703' 00:06:15.097 killing process with pid 97703 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 97703 00:06:15.097 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 97703 00:06:15.357 00:06:15.357 real 0m1.326s 00:06:15.357 user 0m1.559s 00:06:15.357 sys 0m0.379s 00:06:15.357 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.357 09:24:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:15.357 ************************************ 00:06:15.357 END TEST exit_on_failed_rpc_init 00:06:15.357 ************************************ 00:06:15.357 09:24:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:15.357 00:06:15.357 real 0m13.729s 00:06:15.357 user 0m13.309s 00:06:15.357 sys 0m1.566s 00:06:15.357 09:24:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.357 09:24:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.357 ************************************ 00:06:15.357 END TEST skip_rpc 00:06:15.357 ************************************ 00:06:15.357 09:24:02 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:15.357 09:24:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.357 09:24:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.357 09:24:02 -- common/autotest_common.sh@10 -- # set +x 00:06:15.357 ************************************ 00:06:15.357 START TEST rpc_client 00:06:15.357 ************************************ 00:06:15.357 09:24:02 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:15.618 * Looking for test storage... 00:06:15.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:15.618 09:24:02 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.618 09:24:02 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.618 09:24:02 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.618 09:24:02 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.618 09:24:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.619 09:24:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:15.619 09:24:02 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.619 09:24:02 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.619 --rc genhtml_branch_coverage=1 00:06:15.619 --rc genhtml_function_coverage=1 00:06:15.619 --rc genhtml_legend=1 00:06:15.619 --rc geninfo_all_blocks=1 00:06:15.619 --rc geninfo_unexecuted_blocks=1 00:06:15.619 00:06:15.619 ' 00:06:15.619 09:24:02 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.619 --rc genhtml_branch_coverage=1 
00:06:15.619 --rc genhtml_function_coverage=1 00:06:15.619 --rc genhtml_legend=1 00:06:15.619 --rc geninfo_all_blocks=1 00:06:15.619 --rc geninfo_unexecuted_blocks=1 00:06:15.619 00:06:15.619 ' 00:06:15.619 09:24:02 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.619 --rc genhtml_branch_coverage=1 00:06:15.619 --rc genhtml_function_coverage=1 00:06:15.619 --rc genhtml_legend=1 00:06:15.619 --rc geninfo_all_blocks=1 00:06:15.619 --rc geninfo_unexecuted_blocks=1 00:06:15.619 00:06:15.619 ' 00:06:15.619 09:24:02 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.619 --rc genhtml_branch_coverage=1 00:06:15.619 --rc genhtml_function_coverage=1 00:06:15.619 --rc genhtml_legend=1 00:06:15.619 --rc geninfo_all_blocks=1 00:06:15.619 --rc geninfo_unexecuted_blocks=1 00:06:15.619 00:06:15.619 ' 00:06:15.619 09:24:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:15.619 OK 00:06:15.619 09:24:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:15.619 00:06:15.619 real 0m0.231s 00:06:15.619 user 0m0.130s 00:06:15.619 sys 0m0.113s 00:06:15.619 09:24:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.619 09:24:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:15.619 ************************************ 00:06:15.619 END TEST rpc_client 00:06:15.619 ************************************ 00:06:15.619 09:24:02 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:15.880 09:24:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.880 09:24:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.880 09:24:02 -- common/autotest_common.sh@10 
-- # set +x 00:06:15.880 ************************************ 00:06:15.880 START TEST json_config 00:06:15.880 ************************************ 00:06:15.880 09:24:02 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:15.880 09:24:02 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.880 09:24:02 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.880 09:24:02 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.880 09:24:02 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.880 09:24:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.880 09:24:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.880 09:24:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.880 09:24:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.880 09:24:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.880 09:24:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.880 09:24:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.880 09:24:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.880 09:24:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.880 09:24:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.880 09:24:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.880 09:24:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:15.880 09:24:02 json_config -- scripts/common.sh@345 -- # : 1 00:06:15.880 09:24:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.881 09:24:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.881 09:24:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:15.881 09:24:02 json_config -- scripts/common.sh@353 -- # local d=1 00:06:15.881 09:24:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.881 09:24:02 json_config -- scripts/common.sh@355 -- # echo 1 00:06:15.881 09:24:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.881 09:24:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:15.881 09:24:02 json_config -- scripts/common.sh@353 -- # local d=2 00:06:15.881 09:24:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.881 09:24:02 json_config -- scripts/common.sh@355 -- # echo 2 00:06:15.881 09:24:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.881 09:24:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.881 09:24:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.881 09:24:02 json_config -- scripts/common.sh@368 -- # return 0 00:06:15.881 09:24:02 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.881 09:24:02 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.881 --rc genhtml_branch_coverage=1 00:06:15.881 --rc genhtml_function_coverage=1 00:06:15.881 --rc genhtml_legend=1 00:06:15.881 --rc geninfo_all_blocks=1 00:06:15.881 --rc geninfo_unexecuted_blocks=1 00:06:15.881 00:06:15.881 ' 00:06:15.881 09:24:02 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.881 --rc genhtml_branch_coverage=1 00:06:15.881 --rc genhtml_function_coverage=1 00:06:15.881 --rc genhtml_legend=1 00:06:15.881 --rc geninfo_all_blocks=1 00:06:15.881 --rc geninfo_unexecuted_blocks=1 00:06:15.881 00:06:15.881 ' 00:06:15.881 09:24:02 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.881 --rc genhtml_branch_coverage=1 00:06:15.881 --rc genhtml_function_coverage=1 00:06:15.881 --rc genhtml_legend=1 00:06:15.881 --rc geninfo_all_blocks=1 00:06:15.881 --rc geninfo_unexecuted_blocks=1 00:06:15.881 00:06:15.881 ' 00:06:15.881 09:24:02 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.881 --rc genhtml_branch_coverage=1 00:06:15.881 --rc genhtml_function_coverage=1 00:06:15.881 --rc genhtml_legend=1 00:06:15.881 --rc geninfo_all_blocks=1 00:06:15.881 --rc geninfo_unexecuted_blocks=1 00:06:15.881 00:06:15.881 ' 00:06:15.881 09:24:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.881 09:24:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.881 09:24:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.881 09:24:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.881 09:24:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.881 09:24:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.881 09:24:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.881 09:24:02 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.881 09:24:02 json_config -- paths/export.sh@5 -- # export PATH 00:06:15.881 09:24:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@51 -- # : 0 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.881 09:24:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:15.882 INFO: JSON configuration test init 00:06:15.882 09:24:02 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:15.882 09:24:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.882 09:24:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.882 09:24:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:15.882 09:24:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.882 09:24:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.143 09:24:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:16.143 09:24:02 json_config -- json_config/common.sh@9 -- # local app=target 00:06:16.143 09:24:02 json_config -- json_config/common.sh@10 -- # shift 00:06:16.143 09:24:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.143 09:24:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.143 09:24:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.143 09:24:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.143 09:24:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.143 09:24:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=98248 00:06:16.143 09:24:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.143 Waiting for target to run... 
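[editor's note] The `lt 1.15 2` / `cmp_versions` xtrace earlier in this section (scripts/common.sh) shows a component-wise version compare: both strings are split on `.`, `-` and `:` (the `IFS=.-:` in the trace) and the components are walked numerically. A minimal standalone sketch of that logic; `ver_lt` is an illustrative name, not the actual SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions logic traced above: split each version on
# '.', '-' and ':' (the IFS=.-: in the trace), then compare the pieces
# numerically, left to right. Illustrative only, not scripts/common.sh.
ver_lt() {
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    # Walk up to the longer of the two component lists
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # A missing component compares as 0, so "2" behaves like "2.0"
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1   # equal versions are not strictly less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2"   # true, matching the traced return 0
```

In the run above this is what decides whether the old-lcov `--rc lcov_branch_coverage=1 ...` option set gets exported.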
00:06:16.143 09:24:02 json_config -- json_config/common.sh@25 -- # waitforlisten 98248 /var/tmp/spdk_tgt.sock 00:06:16.143 09:24:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 98248 ']' 00:06:16.143 09:24:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.144 09:24:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:16.144 09:24:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.144 09:24:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.144 09:24:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.144 09:24:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.144 [2024-11-19 09:24:02.689911] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
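[editor's note] The `waitforlisten 98248 /var/tmp/spdk_tgt.sock` step traced here polls until the freshly launched spdk_tgt is alive and its RPC socket is up, with a retry cap (`max_retries=100` in the trace). A hedged sketch of that pattern, assuming a plain existence check on the UNIX domain socket; the real helper also probes the socket with an RPC, and `wait_for_socket` is an illustrative name:

```shell
#!/usr/bin/env bash
# Illustrative re-creation of the waitforlisten pattern: poll until the
# target process is still alive AND its UNIX domain socket exists,
# giving up after max_retries attempts. The real SPDK helper also sends
# an RPC over the socket; this sketch only checks for the socket node.
wait_for_socket() {
    local pid=$1 sock=$2 max_retries=${3:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        # If the target already died, waiting any longer is pointless
        if ! kill -0 "$pid" 2>/dev/null; then return 1; fi
        # Success once the socket node appears in the filesystem
        if [ -S "$sock" ]; then return 0; fi
        sleep 0.1
    done
    return 1
}
```

The `(( i == 0 ))` / `return 0` pair that follows in the log is this wait succeeding once the reactor has started.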
00:06:16.144 [2024-11-19 09:24:02.689984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98248 ] 00:06:16.405 [2024-11-19 09:24:03.132812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.666 [2024-11-19 09:24:03.165837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.927 09:24:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.927 09:24:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:16.927 09:24:03 json_config -- json_config/common.sh@26 -- # echo '' 00:06:16.927 00:06:16.927 09:24:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:16.927 09:24:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:16.927 09:24:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.927 09:24:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.927 09:24:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:16.927 09:24:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:16.927 09:24:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.927 09:24:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.927 09:24:03 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:16.927 09:24:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:16.927 09:24:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:17.499 09:24:04 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:17.499 09:24:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:17.499 09:24:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.499 09:24:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.499 09:24:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:17.499 09:24:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:17.499 09:24:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:17.499 09:24:04 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:17.499 09:24:04 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:17.499 09:24:04 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:17.499 09:24:04 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:17.499 09:24:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@54 -- # sort 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:17.761 09:24:04 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:17.761 09:24:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.761 09:24:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:17.761 09:24:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.761 09:24:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:17.761 09:24:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:17.761 09:24:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:17.761 MallocForNvmf0 00:06:18.022 09:24:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:06:18.022 09:24:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:18.022 MallocForNvmf1 00:06:18.022 09:24:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:18.022 09:24:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:18.283 [2024-11-19 09:24:04.862005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.283 09:24:04 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.283 09:24:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.544 09:24:05 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.545 09:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.545 09:24:05 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.545 09:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:18.806 09:24:05 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:18.806 09:24:05 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:19.067 [2024-11-19 09:24:05.576151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:19.067 09:24:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:19.067 09:24:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.067 09:24:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.067 09:24:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:19.067 09:24:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.067 09:24:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.067 09:24:05 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:19.067 09:24:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.067 09:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.328 MallocBdevForConfigChangeCheck 00:06:19.328 09:24:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:19.328 09:24:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.328 09:24:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.328 09:24:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:19.328 09:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.590 09:24:06 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:19.590 INFO: shutting down applications... 00:06:19.590 09:24:06 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:19.590 09:24:06 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:19.590 09:24:06 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:19.590 09:24:06 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:20.162 Calling clear_iscsi_subsystem 00:06:20.162 Calling clear_nvmf_subsystem 00:06:20.162 Calling clear_nbd_subsystem 00:06:20.162 Calling clear_ublk_subsystem 00:06:20.162 Calling clear_vhost_blk_subsystem 00:06:20.162 Calling clear_vhost_scsi_subsystem 00:06:20.162 Calling clear_bdev_subsystem 00:06:20.162 09:24:06 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:20.162 09:24:06 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:20.162 09:24:06 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:20.162 09:24:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:20.162 09:24:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:20.162 09:24:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:20.424 09:24:07 json_config -- json_config/json_config.sh@352 -- # break 00:06:20.424 09:24:07 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:20.424 09:24:07 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:06:20.424 09:24:07 json_config -- json_config/common.sh@31 -- # local app=target 00:06:20.424 09:24:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:20.424 09:24:07 json_config -- json_config/common.sh@35 -- # [[ -n 98248 ]] 00:06:20.424 09:24:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 98248 00:06:20.424 09:24:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:20.424 09:24:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.424 09:24:07 json_config -- json_config/common.sh@41 -- # kill -0 98248 00:06:20.424 09:24:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.998 09:24:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.998 09:24:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.998 09:24:07 json_config -- json_config/common.sh@41 -- # kill -0 98248 00:06:20.998 09:24:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:20.998 09:24:07 json_config -- json_config/common.sh@43 -- # break 00:06:20.998 09:24:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:20.998 09:24:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:20.998 SPDK target shutdown done 00:06:20.998 09:24:07 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:20.998 INFO: relaunching applications... 
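[editor's note] The shutdown traced above (json_config/common.sh@38-45) is the classic graceful-stop loop: send SIGINT once, then poll `kill -0` (an existence check, no signal delivered) every 0.5s for up to 30 iterations until the PID disappears. A sketch under those numbers; the signal is parameterized here for reuse, and the SIGKILL escalation at the end is an assumption, not something visible in this run:

```shell
#!/usr/bin/env bash
# Graceful shutdown loop as traced: signal once, then poll kill -0
# until the process is gone, within a 30 x 0.5s budget. The SIGKILL
# fallback after the budget expires is an assumption, not in the log.
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT}
    local i
    kill -s "$sig" "$pid" 2>/dev/null || return 0   # already gone
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "shutdown done"
            return 0
        fi
        sleep 0.5
    done
    kill -s KILL "$pid" 2>/dev/null || true   # escalate past the grace period
    return 1
}
```

SIGINT is used first because spdk_tgt traps it to tear down subsystems cleanly; a hard kill would skip the 'SPDK target shutdown done' path seen above.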
00:06:20.998 09:24:07 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:20.998 09:24:07 json_config -- json_config/common.sh@9 -- # local app=target 00:06:20.998 09:24:07 json_config -- json_config/common.sh@10 -- # shift 00:06:20.998 09:24:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.998 09:24:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.998 09:24:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.998 09:24:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.998 09:24:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.998 09:24:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:20.998 09:24:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=99385 00:06:20.998 09:24:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.998 Waiting for target to run... 00:06:20.998 09:24:07 json_config -- json_config/common.sh@25 -- # waitforlisten 99385 /var/tmp/spdk_tgt.sock 00:06:20.998 09:24:07 json_config -- common/autotest_common.sh@835 -- # '[' -z 99385 ']' 00:06:20.998 09:24:07 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.998 09:24:07 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.998 09:24:07 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:20.998 09:24:07 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.998 09:24:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.998 [2024-11-19 09:24:07.545781] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:20.998 [2024-11-19 09:24:07.545847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99385 ] 00:06:21.261 [2024-11-19 09:24:07.966553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.261 [2024-11-19 09:24:08.000687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.834 [2024-11-19 09:24:08.500123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.834 [2024-11-19 09:24:08.532492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:21.834 09:24:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.834 09:24:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:21.834 09:24:08 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.834 00:06:21.834 09:24:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:21.834 09:24:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:21.834 INFO: Checking if target configuration is the same... 
00:06:21.834 09:24:08 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.834 09:24:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:21.834 09:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.095 + '[' 2 -ne 2 ']' 00:06:22.095 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.095 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:22.095 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.095 +++ basename /dev/fd/62 00:06:22.095 ++ mktemp /tmp/62.XXX 00:06:22.095 + tmp_file_1=/tmp/62.jAI 00:06:22.095 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.095 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.095 + tmp_file_2=/tmp/spdk_tgt_config.json.e6I 00:06:22.095 + ret=0 00:06:22.095 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.356 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.356 + diff -u /tmp/62.jAI /tmp/spdk_tgt_config.json.e6I 00:06:22.356 + echo 'INFO: JSON config files are the same' 00:06:22.356 INFO: JSON config files are the same 00:06:22.356 + rm /tmp/62.jAI /tmp/spdk_tgt_config.json.e6I 00:06:22.356 + exit 0 00:06:22.356 09:24:08 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:22.356 09:24:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:22.356 INFO: changing configuration and checking if this can be detected... 
00:06:22.356 09:24:08 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.356 09:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.615 09:24:09 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.615 09:24:09 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:22.615 09:24:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.615 + '[' 2 -ne 2 ']' 00:06:22.615 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.615 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:22.615 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.615 +++ basename /dev/fd/62 00:06:22.615 ++ mktemp /tmp/62.XXX 00:06:22.615 + tmp_file_1=/tmp/62.JV5 00:06:22.616 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.616 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.616 + tmp_file_2=/tmp/spdk_tgt_config.json.0gK 00:06:22.616 + ret=0 00:06:22.616 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.876 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.876 + diff -u /tmp/62.JV5 /tmp/spdk_tgt_config.json.0gK 00:06:22.876 + ret=1 00:06:22.876 + echo '=== Start of file: /tmp/62.JV5 ===' 00:06:22.876 + cat /tmp/62.JV5 00:06:22.876 + echo '=== End of file: /tmp/62.JV5 ===' 00:06:22.876 + echo '' 00:06:22.876 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0gK ===' 00:06:22.876 + cat /tmp/spdk_tgt_config.json.0gK 00:06:22.876 + echo '=== End of file: /tmp/spdk_tgt_config.json.0gK ===' 00:06:22.876 + echo '' 00:06:22.876 + rm /tmp/62.JV5 /tmp/spdk_tgt_config.json.0gK 00:06:22.876 + exit 1 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:22.876 INFO: configuration change detected. 
00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 99385 ]] 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.876 09:24:09 json_config -- json_config/json_config.sh@330 -- # killprocess 99385 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@954 -- # '[' -z 99385 ']' 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@958 -- # kill -0 99385 
00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@959 -- # uname 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.876 09:24:09 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99385 00:06:23.137 09:24:09 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.137 09:24:09 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.137 09:24:09 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99385' 00:06:23.137 killing process with pid 99385 00:06:23.137 09:24:09 json_config -- common/autotest_common.sh@973 -- # kill 99385 00:06:23.137 09:24:09 json_config -- common/autotest_common.sh@978 -- # wait 99385 00:06:23.398 09:24:09 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.398 09:24:09 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:23.398 09:24:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.398 09:24:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.399 09:24:09 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:23.399 09:24:09 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:23.399 INFO: Success 00:06:23.399 00:06:23.399 real 0m7.562s 00:06:23.399 user 0m8.974s 00:06:23.399 sys 0m2.171s 00:06:23.399 09:24:09 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.399 09:24:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.399 ************************************ 00:06:23.399 END TEST json_config 00:06:23.399 ************************************ 00:06:23.399 09:24:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:23.399 09:24:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.399 09:24:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.399 09:24:10 -- common/autotest_common.sh@10 -- # set +x 00:06:23.399 ************************************ 00:06:23.399 START TEST json_config_extra_key 00:06:23.399 ************************************ 00:06:23.399 09:24:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:23.399 09:24:10 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.399 09:24:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.399 09:24:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.660 09:24:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.660 09:24:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:23.660 09:24:10 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.660 09:24:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.660 --rc genhtml_branch_coverage=1 00:06:23.660 --rc genhtml_function_coverage=1 00:06:23.660 --rc genhtml_legend=1 00:06:23.660 --rc geninfo_all_blocks=1 
00:06:23.660 --rc geninfo_unexecuted_blocks=1 00:06:23.660 00:06:23.660 ' 00:06:23.660 09:24:10 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.660 --rc genhtml_branch_coverage=1 00:06:23.660 --rc genhtml_function_coverage=1 00:06:23.660 --rc genhtml_legend=1 00:06:23.660 --rc geninfo_all_blocks=1 00:06:23.660 --rc geninfo_unexecuted_blocks=1 00:06:23.660 00:06:23.660 ' 00:06:23.660 09:24:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.660 --rc genhtml_branch_coverage=1 00:06:23.660 --rc genhtml_function_coverage=1 00:06:23.660 --rc genhtml_legend=1 00:06:23.660 --rc geninfo_all_blocks=1 00:06:23.660 --rc geninfo_unexecuted_blocks=1 00:06:23.660 00:06:23.660 ' 00:06:23.660 09:24:10 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.660 --rc genhtml_branch_coverage=1 00:06:23.660 --rc genhtml_function_coverage=1 00:06:23.660 --rc genhtml_legend=1 00:06:23.660 --rc geninfo_all_blocks=1 00:06:23.660 --rc geninfo_unexecuted_blocks=1 00:06:23.660 00:06:23.660 ' 00:06:23.660 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.660 09:24:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:23.660 09:24:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.660 09:24:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.660 09:24:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.661 09:24:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.661 09:24:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.661 09:24:10 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.661 09:24:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.661 09:24:10 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.661 09:24:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.661 09:24:10 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.661 09:24:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:23.661 09:24:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:23.661 09:24:10 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.661 09:24:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:23.661 INFO: launching applications... 00:06:23.661 09:24:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=100407 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:23.661 Waiting for target to run... 
00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 100407 /var/tmp/spdk_tgt.sock 00:06:23.661 09:24:10 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 100407 ']' 00:06:23.661 09:24:10 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.661 09:24:10 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:23.661 09:24:10 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.661 09:24:10 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.661 09:24:10 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.661 09:24:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.661 [2024-11-19 09:24:10.315489] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:23.661 [2024-11-19 09:24:10.315567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100407 ] 00:06:23.923 [2024-11-19 09:24:10.610877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.923 [2024-11-19 09:24:10.635395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.495 09:24:11 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.495 09:24:11 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:24.495 09:24:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:24.495 00:06:24.495 09:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:24.495 INFO: shutting down applications... 00:06:24.495 09:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:24.495 09:24:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:24.495 09:24:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:24.495 09:24:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 100407 ]] 00:06:24.495 09:24:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 100407 00:06:24.495 09:24:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:24.495 09:24:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.495 09:24:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 100407 00:06:24.495 09:24:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.069 09:24:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.069 09:24:11 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.069 09:24:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 100407 00:06:25.069 09:24:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:25.069 09:24:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:25.069 09:24:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:25.069 09:24:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:25.069 SPDK target shutdown done 00:06:25.069 09:24:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:25.069 Success 00:06:25.069 00:06:25.069 real 0m1.571s 00:06:25.069 user 0m1.175s 00:06:25.069 sys 0m0.415s 00:06:25.069 09:24:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.069 09:24:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.069 ************************************ 00:06:25.069 END TEST json_config_extra_key 00:06:25.069 ************************************ 00:06:25.069 09:24:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:25.069 09:24:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.069 09:24:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.069 09:24:11 -- common/autotest_common.sh@10 -- # set +x 00:06:25.069 ************************************ 00:06:25.069 START TEST alias_rpc 00:06:25.069 ************************************ 00:06:25.069 09:24:11 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:25.069 * Looking for test storage... 
00:06:25.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:25.069 09:24:11 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.069 09:24:11 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.069 09:24:11 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.330 09:24:11 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.330 09:24:11 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:25.331 09:24:11 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:25.331 09:24:11 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.331 09:24:11 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:25.331 09:24:11 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.331 09:24:11 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.331 09:24:11 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.331 09:24:11 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.331 --rc genhtml_branch_coverage=1 00:06:25.331 --rc genhtml_function_coverage=1 00:06:25.331 --rc genhtml_legend=1 00:06:25.331 --rc geninfo_all_blocks=1 00:06:25.331 --rc geninfo_unexecuted_blocks=1 00:06:25.331 00:06:25.331 ' 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.331 --rc genhtml_branch_coverage=1 00:06:25.331 --rc genhtml_function_coverage=1 00:06:25.331 --rc genhtml_legend=1 00:06:25.331 --rc geninfo_all_blocks=1 00:06:25.331 --rc geninfo_unexecuted_blocks=1 00:06:25.331 00:06:25.331 ' 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:25.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.331 --rc genhtml_branch_coverage=1 00:06:25.331 --rc genhtml_function_coverage=1 00:06:25.331 --rc genhtml_legend=1 00:06:25.331 --rc geninfo_all_blocks=1 00:06:25.331 --rc geninfo_unexecuted_blocks=1 00:06:25.331 00:06:25.331 ' 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.331 --rc genhtml_branch_coverage=1 00:06:25.331 --rc genhtml_function_coverage=1 00:06:25.331 --rc genhtml_legend=1 00:06:25.331 --rc geninfo_all_blocks=1 00:06:25.331 --rc geninfo_unexecuted_blocks=1 00:06:25.331 00:06:25.331 ' 00:06:25.331 09:24:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:25.331 09:24:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=100980 00:06:25.331 09:24:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 100980 00:06:25.331 09:24:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 100980 ']' 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.331 09:24:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.331 [2024-11-19 09:24:11.965028] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:25.331 [2024-11-19 09:24:11.965097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100980 ] 00:06:25.331 [2024-11-19 09:24:12.050009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.591 [2024-11-19 09:24:12.081076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.163 09:24:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.163 09:24:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:26.163 09:24:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:26.425 09:24:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 100980 00:06:26.425 09:24:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 100980 ']' 00:06:26.425 09:24:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 100980 00:06:26.425 09:24:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:26.425 09:24:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.425 09:24:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100980 00:06:26.425 09:24:13 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.425 09:24:13 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.425 09:24:13 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100980' 00:06:26.425 killing process with pid 100980 00:06:26.425 09:24:13 alias_rpc -- common/autotest_common.sh@973 -- # kill 100980 00:06:26.425 09:24:13 alias_rpc -- common/autotest_common.sh@978 -- # wait 100980 00:06:26.687 00:06:26.687 real 0m1.518s 00:06:26.687 user 0m1.657s 00:06:26.687 sys 0m0.442s 00:06:26.687 09:24:13 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.687 09:24:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.687 ************************************ 00:06:26.687 END TEST alias_rpc 00:06:26.687 ************************************ 00:06:26.687 09:24:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:26.687 09:24:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:26.687 09:24:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.687 09:24:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.687 09:24:13 -- common/autotest_common.sh@10 -- # set +x 00:06:26.687 ************************************ 00:06:26.687 START TEST spdkcli_tcp 00:06:26.687 ************************************ 00:06:26.687 09:24:13 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:26.687 * Looking for test storage... 
00:06:26.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:26.687 09:24:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.687 09:24:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.687 09:24:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.950 09:24:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.950 --rc genhtml_branch_coverage=1 00:06:26.950 --rc genhtml_function_coverage=1 00:06:26.950 --rc genhtml_legend=1 00:06:26.950 --rc geninfo_all_blocks=1 00:06:26.950 --rc geninfo_unexecuted_blocks=1 00:06:26.950 00:06:26.950 ' 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.950 --rc genhtml_branch_coverage=1 00:06:26.950 --rc genhtml_function_coverage=1 00:06:26.950 --rc genhtml_legend=1 00:06:26.950 --rc geninfo_all_blocks=1 00:06:26.950 --rc geninfo_unexecuted_blocks=1 00:06:26.950 00:06:26.950 ' 00:06:26.950 09:24:13 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.950 --rc genhtml_branch_coverage=1 00:06:26.950 --rc genhtml_function_coverage=1 00:06:26.950 --rc genhtml_legend=1 00:06:26.950 --rc geninfo_all_blocks=1 00:06:26.950 --rc geninfo_unexecuted_blocks=1 00:06:26.950 00:06:26.950 ' 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.950 --rc genhtml_branch_coverage=1 00:06:26.950 --rc genhtml_function_coverage=1 00:06:26.950 --rc genhtml_legend=1 00:06:26.950 --rc geninfo_all_blocks=1 00:06:26.950 --rc geninfo_unexecuted_blocks=1 00:06:26.950 00:06:26.950 ' 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=101322 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 101322 00:06:26.950 09:24:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 101322 ']' 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.950 09:24:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.950 [2024-11-19 09:24:13.560602] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:26.950 [2024-11-19 09:24:13.560672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101322 ] 00:06:26.950 [2024-11-19 09:24:13.650792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.950 [2024-11-19 09:24:13.693625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.950 [2024-11-19 09:24:13.693625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.894 09:24:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.894 09:24:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:27.894 09:24:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:27.894 09:24:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=101438 00:06:27.894 09:24:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:27.894 [ 00:06:27.894 "bdev_malloc_delete", 00:06:27.894 "bdev_malloc_create", 00:06:27.894 "bdev_null_resize", 00:06:27.894 "bdev_null_delete", 00:06:27.894 "bdev_null_create", 00:06:27.894 "bdev_nvme_cuse_unregister", 00:06:27.894 "bdev_nvme_cuse_register", 00:06:27.894 "bdev_opal_new_user", 00:06:27.894 "bdev_opal_set_lock_state", 00:06:27.894 "bdev_opal_delete", 00:06:27.894 "bdev_opal_get_info", 00:06:27.894 "bdev_opal_create", 00:06:27.894 "bdev_nvme_opal_revert", 00:06:27.894 "bdev_nvme_opal_init", 00:06:27.894 "bdev_nvme_send_cmd", 00:06:27.894 "bdev_nvme_set_keys", 00:06:27.894 "bdev_nvme_get_path_iostat", 00:06:27.894 "bdev_nvme_get_mdns_discovery_info", 00:06:27.894 "bdev_nvme_stop_mdns_discovery", 00:06:27.894 "bdev_nvme_start_mdns_discovery", 00:06:27.894 "bdev_nvme_set_multipath_policy", 00:06:27.894 "bdev_nvme_set_preferred_path", 00:06:27.894 "bdev_nvme_get_io_paths", 00:06:27.894 "bdev_nvme_remove_error_injection", 00:06:27.894 "bdev_nvme_add_error_injection", 00:06:27.894 "bdev_nvme_get_discovery_info", 00:06:27.894 "bdev_nvme_stop_discovery", 00:06:27.894 "bdev_nvme_start_discovery", 00:06:27.894 "bdev_nvme_get_controller_health_info", 00:06:27.894 "bdev_nvme_disable_controller", 00:06:27.894 "bdev_nvme_enable_controller", 00:06:27.894 "bdev_nvme_reset_controller", 00:06:27.894 "bdev_nvme_get_transport_statistics", 00:06:27.894 "bdev_nvme_apply_firmware", 00:06:27.894 "bdev_nvme_detach_controller", 00:06:27.894 "bdev_nvme_get_controllers", 00:06:27.894 "bdev_nvme_attach_controller", 00:06:27.894 "bdev_nvme_set_hotplug", 00:06:27.894 "bdev_nvme_set_options", 00:06:27.894 "bdev_passthru_delete", 00:06:27.894 "bdev_passthru_create", 00:06:27.894 "bdev_lvol_set_parent_bdev", 00:06:27.894 "bdev_lvol_set_parent", 00:06:27.894 "bdev_lvol_check_shallow_copy", 00:06:27.894 "bdev_lvol_start_shallow_copy", 00:06:27.894 
"bdev_lvol_grow_lvstore", 00:06:27.894 "bdev_lvol_get_lvols", 00:06:27.894 "bdev_lvol_get_lvstores", 00:06:27.894 "bdev_lvol_delete", 00:06:27.894 "bdev_lvol_set_read_only", 00:06:27.894 "bdev_lvol_resize", 00:06:27.894 "bdev_lvol_decouple_parent", 00:06:27.894 "bdev_lvol_inflate", 00:06:27.894 "bdev_lvol_rename", 00:06:27.894 "bdev_lvol_clone_bdev", 00:06:27.894 "bdev_lvol_clone", 00:06:27.894 "bdev_lvol_snapshot", 00:06:27.894 "bdev_lvol_create", 00:06:27.894 "bdev_lvol_delete_lvstore", 00:06:27.894 "bdev_lvol_rename_lvstore", 00:06:27.894 "bdev_lvol_create_lvstore", 00:06:27.894 "bdev_raid_set_options", 00:06:27.894 "bdev_raid_remove_base_bdev", 00:06:27.894 "bdev_raid_add_base_bdev", 00:06:27.894 "bdev_raid_delete", 00:06:27.894 "bdev_raid_create", 00:06:27.894 "bdev_raid_get_bdevs", 00:06:27.894 "bdev_error_inject_error", 00:06:27.894 "bdev_error_delete", 00:06:27.894 "bdev_error_create", 00:06:27.894 "bdev_split_delete", 00:06:27.894 "bdev_split_create", 00:06:27.894 "bdev_delay_delete", 00:06:27.894 "bdev_delay_create", 00:06:27.894 "bdev_delay_update_latency", 00:06:27.894 "bdev_zone_block_delete", 00:06:27.894 "bdev_zone_block_create", 00:06:27.894 "blobfs_create", 00:06:27.894 "blobfs_detect", 00:06:27.894 "blobfs_set_cache_size", 00:06:27.894 "bdev_aio_delete", 00:06:27.894 "bdev_aio_rescan", 00:06:27.894 "bdev_aio_create", 00:06:27.894 "bdev_ftl_set_property", 00:06:27.894 "bdev_ftl_get_properties", 00:06:27.894 "bdev_ftl_get_stats", 00:06:27.894 "bdev_ftl_unmap", 00:06:27.894 "bdev_ftl_unload", 00:06:27.894 "bdev_ftl_delete", 00:06:27.894 "bdev_ftl_load", 00:06:27.894 "bdev_ftl_create", 00:06:27.894 "bdev_virtio_attach_controller", 00:06:27.894 "bdev_virtio_scsi_get_devices", 00:06:27.894 "bdev_virtio_detach_controller", 00:06:27.894 "bdev_virtio_blk_set_hotplug", 00:06:27.894 "bdev_iscsi_delete", 00:06:27.894 "bdev_iscsi_create", 00:06:27.894 "bdev_iscsi_set_options", 00:06:27.894 "accel_error_inject_error", 00:06:27.894 "ioat_scan_accel_module", 
00:06:27.894 "dsa_scan_accel_module", 00:06:27.894 "iaa_scan_accel_module", 00:06:27.894 "vfu_virtio_create_fs_endpoint", 00:06:27.894 "vfu_virtio_create_scsi_endpoint", 00:06:27.894 "vfu_virtio_scsi_remove_target", 00:06:27.894 "vfu_virtio_scsi_add_target", 00:06:27.894 "vfu_virtio_create_blk_endpoint", 00:06:27.894 "vfu_virtio_delete_endpoint", 00:06:27.894 "keyring_file_remove_key", 00:06:27.894 "keyring_file_add_key", 00:06:27.894 "keyring_linux_set_options", 00:06:27.894 "fsdev_aio_delete", 00:06:27.894 "fsdev_aio_create", 00:06:27.894 "iscsi_get_histogram", 00:06:27.894 "iscsi_enable_histogram", 00:06:27.894 "iscsi_set_options", 00:06:27.894 "iscsi_get_auth_groups", 00:06:27.894 "iscsi_auth_group_remove_secret", 00:06:27.894 "iscsi_auth_group_add_secret", 00:06:27.894 "iscsi_delete_auth_group", 00:06:27.894 "iscsi_create_auth_group", 00:06:27.894 "iscsi_set_discovery_auth", 00:06:27.894 "iscsi_get_options", 00:06:27.894 "iscsi_target_node_request_logout", 00:06:27.894 "iscsi_target_node_set_redirect", 00:06:27.894 "iscsi_target_node_set_auth", 00:06:27.894 "iscsi_target_node_add_lun", 00:06:27.894 "iscsi_get_stats", 00:06:27.894 "iscsi_get_connections", 00:06:27.894 "iscsi_portal_group_set_auth", 00:06:27.894 "iscsi_start_portal_group", 00:06:27.894 "iscsi_delete_portal_group", 00:06:27.894 "iscsi_create_portal_group", 00:06:27.894 "iscsi_get_portal_groups", 00:06:27.894 "iscsi_delete_target_node", 00:06:27.894 "iscsi_target_node_remove_pg_ig_maps", 00:06:27.894 "iscsi_target_node_add_pg_ig_maps", 00:06:27.894 "iscsi_create_target_node", 00:06:27.894 "iscsi_get_target_nodes", 00:06:27.894 "iscsi_delete_initiator_group", 00:06:27.894 "iscsi_initiator_group_remove_initiators", 00:06:27.894 "iscsi_initiator_group_add_initiators", 00:06:27.894 "iscsi_create_initiator_group", 00:06:27.894 "iscsi_get_initiator_groups", 00:06:27.894 "nvmf_set_crdt", 00:06:27.894 "nvmf_set_config", 00:06:27.894 "nvmf_set_max_subsystems", 00:06:27.894 "nvmf_stop_mdns_prr", 
00:06:27.894 "nvmf_publish_mdns_prr", 00:06:27.894 "nvmf_subsystem_get_listeners", 00:06:27.894 "nvmf_subsystem_get_qpairs", 00:06:27.894 "nvmf_subsystem_get_controllers", 00:06:27.894 "nvmf_get_stats", 00:06:27.894 "nvmf_get_transports", 00:06:27.894 "nvmf_create_transport", 00:06:27.894 "nvmf_get_targets", 00:06:27.894 "nvmf_delete_target", 00:06:27.894 "nvmf_create_target", 00:06:27.894 "nvmf_subsystem_allow_any_host", 00:06:27.894 "nvmf_subsystem_set_keys", 00:06:27.895 "nvmf_subsystem_remove_host", 00:06:27.895 "nvmf_subsystem_add_host", 00:06:27.895 "nvmf_ns_remove_host", 00:06:27.895 "nvmf_ns_add_host", 00:06:27.895 "nvmf_subsystem_remove_ns", 00:06:27.895 "nvmf_subsystem_set_ns_ana_group", 00:06:27.895 "nvmf_subsystem_add_ns", 00:06:27.895 "nvmf_subsystem_listener_set_ana_state", 00:06:27.895 "nvmf_discovery_get_referrals", 00:06:27.895 "nvmf_discovery_remove_referral", 00:06:27.895 "nvmf_discovery_add_referral", 00:06:27.895 "nvmf_subsystem_remove_listener", 00:06:27.895 "nvmf_subsystem_add_listener", 00:06:27.895 "nvmf_delete_subsystem", 00:06:27.895 "nvmf_create_subsystem", 00:06:27.895 "nvmf_get_subsystems", 00:06:27.895 "env_dpdk_get_mem_stats", 00:06:27.895 "nbd_get_disks", 00:06:27.895 "nbd_stop_disk", 00:06:27.895 "nbd_start_disk", 00:06:27.895 "ublk_recover_disk", 00:06:27.895 "ublk_get_disks", 00:06:27.895 "ublk_stop_disk", 00:06:27.895 "ublk_start_disk", 00:06:27.895 "ublk_destroy_target", 00:06:27.895 "ublk_create_target", 00:06:27.895 "virtio_blk_create_transport", 00:06:27.895 "virtio_blk_get_transports", 00:06:27.895 "vhost_controller_set_coalescing", 00:06:27.895 "vhost_get_controllers", 00:06:27.895 "vhost_delete_controller", 00:06:27.895 "vhost_create_blk_controller", 00:06:27.895 "vhost_scsi_controller_remove_target", 00:06:27.895 "vhost_scsi_controller_add_target", 00:06:27.895 "vhost_start_scsi_controller", 00:06:27.895 "vhost_create_scsi_controller", 00:06:27.895 "thread_set_cpumask", 00:06:27.895 "scheduler_set_options", 00:06:27.895 
"framework_get_governor", 00:06:27.895 "framework_get_scheduler", 00:06:27.895 "framework_set_scheduler", 00:06:27.895 "framework_get_reactors", 00:06:27.895 "thread_get_io_channels", 00:06:27.895 "thread_get_pollers", 00:06:27.895 "thread_get_stats", 00:06:27.895 "framework_monitor_context_switch", 00:06:27.895 "spdk_kill_instance", 00:06:27.895 "log_enable_timestamps", 00:06:27.895 "log_get_flags", 00:06:27.895 "log_clear_flag", 00:06:27.895 "log_set_flag", 00:06:27.895 "log_get_level", 00:06:27.895 "log_set_level", 00:06:27.895 "log_get_print_level", 00:06:27.895 "log_set_print_level", 00:06:27.895 "framework_enable_cpumask_locks", 00:06:27.895 "framework_disable_cpumask_locks", 00:06:27.895 "framework_wait_init", 00:06:27.895 "framework_start_init", 00:06:27.895 "scsi_get_devices", 00:06:27.895 "bdev_get_histogram", 00:06:27.895 "bdev_enable_histogram", 00:06:27.895 "bdev_set_qos_limit", 00:06:27.895 "bdev_set_qd_sampling_period", 00:06:27.895 "bdev_get_bdevs", 00:06:27.895 "bdev_reset_iostat", 00:06:27.895 "bdev_get_iostat", 00:06:27.895 "bdev_examine", 00:06:27.895 "bdev_wait_for_examine", 00:06:27.895 "bdev_set_options", 00:06:27.895 "accel_get_stats", 00:06:27.895 "accel_set_options", 00:06:27.895 "accel_set_driver", 00:06:27.895 "accel_crypto_key_destroy", 00:06:27.895 "accel_crypto_keys_get", 00:06:27.895 "accel_crypto_key_create", 00:06:27.895 "accel_assign_opc", 00:06:27.895 "accel_get_module_info", 00:06:27.895 "accel_get_opc_assignments", 00:06:27.895 "vmd_rescan", 00:06:27.895 "vmd_remove_device", 00:06:27.895 "vmd_enable", 00:06:27.895 "sock_get_default_impl", 00:06:27.895 "sock_set_default_impl", 00:06:27.895 "sock_impl_set_options", 00:06:27.895 "sock_impl_get_options", 00:06:27.895 "iobuf_get_stats", 00:06:27.895 "iobuf_set_options", 00:06:27.895 "keyring_get_keys", 00:06:27.895 "vfu_tgt_set_base_path", 00:06:27.895 "framework_get_pci_devices", 00:06:27.895 "framework_get_config", 00:06:27.895 "framework_get_subsystems", 00:06:27.895 
"fsdev_set_opts", 00:06:27.895 "fsdev_get_opts", 00:06:27.895 "trace_get_info", 00:06:27.895 "trace_get_tpoint_group_mask", 00:06:27.895 "trace_disable_tpoint_group", 00:06:27.895 "trace_enable_tpoint_group", 00:06:27.895 "trace_clear_tpoint_mask", 00:06:27.895 "trace_set_tpoint_mask", 00:06:27.895 "notify_get_notifications", 00:06:27.895 "notify_get_types", 00:06:27.895 "spdk_get_version", 00:06:27.895 "rpc_get_methods" 00:06:27.895 ] 00:06:27.895 09:24:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:27.895 09:24:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.895 09:24:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.895 09:24:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:27.895 09:24:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 101322 00:06:27.895 09:24:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 101322 ']' 00:06:27.895 09:24:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 101322 00:06:27.895 09:24:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:27.895 09:24:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.895 09:24:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101322 00:06:28.156 09:24:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.156 09:24:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.156 09:24:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101322' 00:06:28.156 killing process with pid 101322 00:06:28.156 09:24:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 101322 00:06:28.156 09:24:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 101322 00:06:28.156 00:06:28.156 real 0m1.541s 00:06:28.156 user 0m2.830s 00:06:28.156 sys 0m0.467s 00:06:28.156 09:24:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:28.156 09:24:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.156 ************************************ 00:06:28.156 END TEST spdkcli_tcp 00:06:28.157 ************************************ 00:06:28.157 09:24:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:28.157 09:24:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.157 09:24:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.157 09:24:14 -- common/autotest_common.sh@10 -- # set +x 00:06:28.419 ************************************ 00:06:28.419 START TEST dpdk_mem_utility 00:06:28.419 ************************************ 00:06:28.419 09:24:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:28.419 * Looking for test storage... 00:06:28.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.419 09:24:15 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.419 09:24:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.419 09:24:15 dpdk_mem_utility 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.419 --rc genhtml_branch_coverage=1 00:06:28.419 --rc genhtml_function_coverage=1 00:06:28.419 --rc genhtml_legend=1 00:06:28.419 --rc geninfo_all_blocks=1 00:06:28.419 --rc geninfo_unexecuted_blocks=1 00:06:28.419 00:06:28.419 ' 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.419 --rc genhtml_branch_coverage=1 00:06:28.419 --rc genhtml_function_coverage=1 00:06:28.419 --rc genhtml_legend=1 00:06:28.419 --rc geninfo_all_blocks=1 00:06:28.419 --rc geninfo_unexecuted_blocks=1 00:06:28.419 00:06:28.419 ' 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.419 --rc genhtml_branch_coverage=1 00:06:28.419 --rc genhtml_function_coverage=1 00:06:28.419 --rc genhtml_legend=1 00:06:28.419 --rc geninfo_all_blocks=1 00:06:28.419 --rc geninfo_unexecuted_blocks=1 00:06:28.419 00:06:28.419 ' 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.419 --rc genhtml_branch_coverage=1 00:06:28.419 --rc genhtml_function_coverage=1 00:06:28.419 --rc genhtml_legend=1 00:06:28.419 --rc geninfo_all_blocks=1 00:06:28.419 --rc geninfo_unexecuted_blocks=1 00:06:28.419 00:06:28.419 ' 00:06:28.419 09:24:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:28.419 09:24:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=101686 00:06:28.419 09:24:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 101686 00:06:28.419 09:24:15 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 101686 ']' 00:06:28.419 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.420 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.420 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.420 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.420 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.682 [2024-11-19 09:24:15.176484] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:28.682 [2024-11-19 09:24:15.176558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101686 ] 00:06:28.682 [2024-11-19 09:24:15.264299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.682 [2024-11-19 09:24:15.299448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.254 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.254 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:29.254 09:24:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:29.254 09:24:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:29.254 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.254 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.254 { 00:06:29.254 "filename": "/tmp/spdk_mem_dump.txt" 00:06:29.254 } 00:06:29.254 09:24:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.254 09:24:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:29.516 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:29.516 1 heaps totaling size 810.000000 MiB 00:06:29.516 size: 810.000000 MiB heap id: 0 00:06:29.516 end heaps---------- 00:06:29.516 9 mempools totaling size 595.772034 MiB 00:06:29.516 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:29.516 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:29.516 size: 92.545471 MiB name: bdev_io_101686 00:06:29.516 size: 50.003479 MiB name: msgpool_101686 00:06:29.516 size: 36.509338 MiB name: fsdev_io_101686 00:06:29.516 
size: 21.763794 MiB name: PDU_Pool 00:06:29.516 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:29.516 size: 4.133484 MiB name: evtpool_101686 00:06:29.516 size: 0.026123 MiB name: Session_Pool 00:06:29.516 end mempools------- 00:06:29.516 6 memzones totaling size 4.142822 MiB 00:06:29.516 size: 1.000366 MiB name: RG_ring_0_101686 00:06:29.516 size: 1.000366 MiB name: RG_ring_1_101686 00:06:29.516 size: 1.000366 MiB name: RG_ring_4_101686 00:06:29.516 size: 1.000366 MiB name: RG_ring_5_101686 00:06:29.516 size: 0.125366 MiB name: RG_ring_2_101686 00:06:29.516 size: 0.015991 MiB name: RG_ring_3_101686 00:06:29.516 end memzones------- 00:06:29.516 09:24:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:29.516 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:29.516 list of free elements. size: 10.862488 MiB 00:06:29.516 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:29.516 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:29.516 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:29.516 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:29.516 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:29.516 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:29.516 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:29.516 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:29.516 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:29.516 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:29.516 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:29.516 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:29.516 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:29.516 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:29.516 
element at address: 0x200000800000 with size: 0.355042 MiB 00:06:29.516 list of standard malloc elements. size: 199.218628 MiB 00:06:29.516 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:29.516 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:29.516 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:29.516 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:29.516 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:29.516 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:29.516 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:29.516 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:29.516 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:29.516 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200000cff0c0 with size: 0.000183 
MiB 00:06:29.516 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:29.516 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:29.516 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:29.516 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:29.516 list of memzone associated elements. 
size: 599.918884 MiB 00:06:29.516 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:29.516 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:29.516 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:29.516 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:29.516 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:29.516 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_101686_0 00:06:29.516 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:29.516 associated memzone info: size: 48.002930 MiB name: MP_msgpool_101686_0 00:06:29.516 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:29.516 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_101686_0 00:06:29.516 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:29.516 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:29.516 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:29.516 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:29.516 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:29.516 associated memzone info: size: 3.000122 MiB name: MP_evtpool_101686_0 00:06:29.516 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:29.516 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_101686 00:06:29.516 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:29.516 associated memzone info: size: 1.007996 MiB name: MP_evtpool_101686 00:06:29.516 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:29.516 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:29.516 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:29.516 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:29.516 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:29.516 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:29.516 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:29.516 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:29.516 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:29.516 associated memzone info: size: 1.000366 MiB name: RG_ring_0_101686 00:06:29.516 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:29.516 associated memzone info: size: 1.000366 MiB name: RG_ring_1_101686 00:06:29.516 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:29.516 associated memzone info: size: 1.000366 MiB name: RG_ring_4_101686 00:06:29.516 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:29.516 associated memzone info: size: 1.000366 MiB name: RG_ring_5_101686 00:06:29.516 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:29.516 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_101686 00:06:29.516 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:29.516 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_101686 00:06:29.516 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:29.516 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:29.516 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:29.516 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:29.516 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:29.516 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:29.516 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:29.516 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_101686 00:06:29.516 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:29.516 associated memzone info: size: 0.125366 MiB name: RG_ring_2_101686 00:06:29.516 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:29.516 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:29.517 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:29.517 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:29.517 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:29.517 associated memzone info: size: 0.015991 MiB name: RG_ring_3_101686 00:06:29.517 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:29.517 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:29.517 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:29.517 associated memzone info: size: 0.000183 MiB name: MP_msgpool_101686 00:06:29.517 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:29.517 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_101686 00:06:29.517 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:29.517 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_101686 00:06:29.517 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:29.517 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:29.517 09:24:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:29.517 09:24:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 101686 00:06:29.517 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 101686 ']' 00:06:29.517 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 101686 00:06:29.517 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:29.517 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.517 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101686 00:06:29.517 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.517 09:24:16 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.517 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101686' 00:06:29.517 killing process with pid 101686 00:06:29.517 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 101686 00:06:29.517 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 101686 00:06:29.778 00:06:29.778 real 0m1.396s 00:06:29.778 user 0m1.455s 00:06:29.778 sys 0m0.423s 00:06:29.779 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.779 09:24:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.779 ************************************ 00:06:29.779 END TEST dpdk_mem_utility 00:06:29.779 ************************************ 00:06:29.779 09:24:16 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:29.779 09:24:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.779 09:24:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.779 09:24:16 -- common/autotest_common.sh@10 -- # set +x 00:06:29.779 ************************************ 00:06:29.779 START TEST event 00:06:29.779 ************************************ 00:06:29.779 09:24:16 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:29.779 * Looking for test storage... 
00:06:29.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:29.779 09:24:16 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.779 09:24:16 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.779 09:24:16 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.040 09:24:16 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.040 09:24:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.040 09:24:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.040 09:24:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.040 09:24:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.040 09:24:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.040 09:24:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.040 09:24:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.040 09:24:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.040 09:24:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.040 09:24:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.040 09:24:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.040 09:24:16 event -- scripts/common.sh@344 -- # case "$op" in 00:06:30.040 09:24:16 event -- scripts/common.sh@345 -- # : 1 00:06:30.040 09:24:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.040 09:24:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.040 09:24:16 event -- scripts/common.sh@365 -- # decimal 1 00:06:30.040 09:24:16 event -- scripts/common.sh@353 -- # local d=1 00:06:30.040 09:24:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.040 09:24:16 event -- scripts/common.sh@355 -- # echo 1 00:06:30.040 09:24:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.040 09:24:16 event -- scripts/common.sh@366 -- # decimal 2 00:06:30.040 09:24:16 event -- scripts/common.sh@353 -- # local d=2 00:06:30.040 09:24:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.040 09:24:16 event -- scripts/common.sh@355 -- # echo 2 00:06:30.040 09:24:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.040 09:24:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.040 09:24:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.040 09:24:16 event -- scripts/common.sh@368 -- # return 0 00:06:30.040 09:24:16 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.040 09:24:16 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.040 --rc genhtml_branch_coverage=1 00:06:30.040 --rc genhtml_function_coverage=1 00:06:30.040 --rc genhtml_legend=1 00:06:30.040 --rc geninfo_all_blocks=1 00:06:30.040 --rc geninfo_unexecuted_blocks=1 00:06:30.040 00:06:30.040 ' 00:06:30.040 09:24:16 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.040 --rc genhtml_branch_coverage=1 00:06:30.040 --rc genhtml_function_coverage=1 00:06:30.040 --rc genhtml_legend=1 00:06:30.040 --rc geninfo_all_blocks=1 00:06:30.040 --rc geninfo_unexecuted_blocks=1 00:06:30.040 00:06:30.040 ' 00:06:30.040 09:24:16 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.040 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:30.040 --rc genhtml_branch_coverage=1 00:06:30.040 --rc genhtml_function_coverage=1 00:06:30.040 --rc genhtml_legend=1 00:06:30.040 --rc geninfo_all_blocks=1 00:06:30.040 --rc geninfo_unexecuted_blocks=1 00:06:30.040 00:06:30.040 ' 00:06:30.040 09:24:16 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.040 --rc genhtml_branch_coverage=1 00:06:30.040 --rc genhtml_function_coverage=1 00:06:30.040 --rc genhtml_legend=1 00:06:30.040 --rc geninfo_all_blocks=1 00:06:30.040 --rc geninfo_unexecuted_blocks=1 00:06:30.040 00:06:30.040 ' 00:06:30.040 09:24:16 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:30.040 09:24:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.041 09:24:16 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.041 09:24:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:30.041 09:24:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.041 09:24:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.041 ************************************ 00:06:30.041 START TEST event_perf 00:06:30.041 ************************************ 00:06:30.041 09:24:16 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.041 Running I/O for 1 seconds...[2024-11-19 09:24:16.649333] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:30.041 [2024-11-19 09:24:16.649428] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101999 ] 00:06:30.041 [2024-11-19 09:24:16.750834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.302 [2024-11-19 09:24:16.788238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.302 [2024-11-19 09:24:16.788400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.302 [2024-11-19 09:24:16.788635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.302 Running I/O for 1 seconds...[2024-11-19 09:24:16.788635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.245 00:06:31.245 lcore 0: 176377 00:06:31.245 lcore 1: 176379 00:06:31.245 lcore 2: 176379 00:06:31.245 lcore 3: 176377 00:06:31.245 done. 
00:06:31.245 00:06:31.245 real 0m1.189s 00:06:31.245 user 0m4.095s 00:06:31.245 sys 0m0.092s 00:06:31.245 09:24:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.245 09:24:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.245 ************************************ 00:06:31.245 END TEST event_perf 00:06:31.245 ************************************ 00:06:31.245 09:24:17 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:31.245 09:24:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:31.245 09:24:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.245 09:24:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.245 ************************************ 00:06:31.245 START TEST event_reactor 00:06:31.245 ************************************ 00:06:31.245 09:24:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:31.245 [2024-11-19 09:24:17.913712] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:31.245 [2024-11-19 09:24:17.913816] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102280 ] 00:06:31.506 [2024-11-19 09:24:18.012478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.506 [2024-11-19 09:24:18.050140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.450 test_start 00:06:32.450 oneshot 00:06:32.450 tick 100 00:06:32.450 tick 100 00:06:32.450 tick 250 00:06:32.450 tick 100 00:06:32.450 tick 100 00:06:32.450 tick 100 00:06:32.450 tick 250 00:06:32.450 tick 500 00:06:32.450 tick 100 00:06:32.450 tick 100 00:06:32.450 tick 250 00:06:32.450 tick 100 00:06:32.450 tick 100 00:06:32.450 test_end 00:06:32.450 00:06:32.450 real 0m1.183s 00:06:32.450 user 0m1.087s 00:06:32.450 sys 0m0.091s 00:06:32.450 09:24:19 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.450 09:24:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:32.450 ************************************ 00:06:32.450 END TEST event_reactor 00:06:32.450 ************************************ 00:06:32.450 09:24:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.450 09:24:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:32.450 09:24:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.450 09:24:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.450 ************************************ 00:06:32.450 START TEST event_reactor_perf 00:06:32.450 ************************************ 00:06:32.450 09:24:19 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:32.450 [2024-11-19 09:24:19.175199] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:32.450 [2024-11-19 09:24:19.175304] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102630 ] 00:06:32.710 [2024-11-19 09:24:19.274559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.710 [2024-11-19 09:24:19.311402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.654 test_start 00:06:33.654 test_end 00:06:33.654 Performance: 534067 events per second 00:06:33.654 00:06:33.654 real 0m1.184s 00:06:33.654 user 0m1.079s 00:06:33.654 sys 0m0.101s 00:06:33.654 09:24:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.654 09:24:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.654 ************************************ 00:06:33.654 END TEST event_reactor_perf 00:06:33.654 ************************************ 00:06:33.654 09:24:20 event -- event/event.sh@49 -- # uname -s 00:06:33.654 09:24:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:33.654 09:24:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:33.654 09:24:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.654 09:24:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.654 09:24:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.916 ************************************ 00:06:33.916 START TEST event_scheduler 00:06:33.916 ************************************ 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:33.916 * Looking for test storage... 00:06:33.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.916 09:24:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.916 --rc genhtml_branch_coverage=1 00:06:33.916 --rc genhtml_function_coverage=1 00:06:33.916 --rc genhtml_legend=1 00:06:33.916 --rc geninfo_all_blocks=1 00:06:33.916 --rc geninfo_unexecuted_blocks=1 00:06:33.916 00:06:33.916 ' 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.916 --rc genhtml_branch_coverage=1 00:06:33.916 --rc genhtml_function_coverage=1 00:06:33.916 --rc 
genhtml_legend=1 00:06:33.916 --rc geninfo_all_blocks=1 00:06:33.916 --rc geninfo_unexecuted_blocks=1 00:06:33.916 00:06:33.916 ' 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.916 --rc genhtml_branch_coverage=1 00:06:33.916 --rc genhtml_function_coverage=1 00:06:33.916 --rc genhtml_legend=1 00:06:33.916 --rc geninfo_all_blocks=1 00:06:33.916 --rc geninfo_unexecuted_blocks=1 00:06:33.916 00:06:33.916 ' 00:06:33.916 09:24:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.917 --rc genhtml_branch_coverage=1 00:06:33.917 --rc genhtml_function_coverage=1 00:06:33.917 --rc genhtml_legend=1 00:06:33.917 --rc geninfo_all_blocks=1 00:06:33.917 --rc geninfo_unexecuted_blocks=1 00:06:33.917 00:06:33.917 ' 00:06:33.917 09:24:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:33.917 09:24:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=103019 00:06:33.917 09:24:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.917 09:24:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:33.917 09:24:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 103019 00:06:33.917 09:24:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 103019 ']' 00:06:33.917 09:24:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.917 09:24:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.917 09:24:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:33.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:33.917 09:24:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:33.917 09:24:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:34.178 [2024-11-19 09:24:20.676236] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:06:34.178 [2024-11-19 09:24:20.676305] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103019 ]
00:06:34.178 [2024-11-19 09:24:20.769659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:34.178 [2024-11-19 09:24:20.825546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.178 [2024-11-19 09:24:20.825705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:34.178 [2024-11-19 09:24:20.825865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:34.178 [2024-11-19 09:24:20.825865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:34.751 09:24:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:34.751 09:24:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:34.751 09:24:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:34.751 09:24:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:34.751 09:24:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:34.751 [2024-11-19 09:24:21.492302] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:34.751 [2024-11-19 09:24:21.492322] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:34.751 [2024-11-19 09:24:21.492332] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:34.751 [2024-11-19 09:24:21.492339] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:34.751 [2024-11-19 09:24:21.492344] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:35.013 09:24:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:35.013 09:24:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 [2024-11-19 09:24:21.554465] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:35.013 09:24:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:35.013 09:24:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:35.013 09:24:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 ************************************
00:06:35.013 START TEST scheduler_create_thread
00:06:35.013 ************************************
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 2
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 3
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 4
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 5
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 6
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 7
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 8
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.013 9
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.013 09:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.587 10
00:06:35.587 09:24:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.587 09:24:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:35.587 09:24:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.587 09:24:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:36.975 09:24:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:36.975 09:24:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:36.975 09:24:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:36.975 09:24:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:36.975 09:24:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:37.918 09:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:37.918 09:24:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:37.918 09:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:37.918 09:24:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.490 09:24:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.490 09:24:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:38.490 09:24:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:38.490 09:24:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.490 09:24:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:39.433 09:24:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:39.433
00:06:39.433 real 0m4.225s
00:06:39.433 user 0m0.024s
00:06:39.433 sys 0m0.008s
00:06:39.433 09:24:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.433 09:24:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:39.433 ************************************
00:06:39.433 END TEST scheduler_create_thread
00:06:39.433 ************************************
00:06:39.433 09:24:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:39.433 09:24:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 103019
00:06:39.433 09:24:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 103019 ']'
00:06:39.433 09:24:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 103019
00:06:39.433 09:24:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:06:39.433 09:24:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:39.433 09:24:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103019
00:06:39.433 09:24:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:39.433 09:24:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:39.433 09:24:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103019'
killing process with pid 103019
09:24:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 103019
09:24:25 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 103019
00:06:39.433 [2024-11-19 09:24:26.100185] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:39.695
00:06:39.695 real 0m5.841s
00:06:39.695 user 0m12.872s
00:06:39.695 sys 0m0.431s
00:06:39.695 09:24:26 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.695 09:24:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:39.695 ************************************
00:06:39.695 END TEST event_scheduler
00:06:39.695 ************************************
00:06:39.695 09:24:26 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:39.695 09:24:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:39.695 09:24:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:39.695 09:24:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.695 09:24:26 event -- common/autotest_common.sh@10 -- # set +x
00:06:39.695 ************************************
00:06:39.695 START TEST app_repeat
00:06:39.695 ************************************
00:06:39.695 09:24:26 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=104085
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 104085'
Process app_repeat pid: 104085
09:24:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:39.695 09:24:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
09:24:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 104085 /var/tmp/spdk-nbd.sock
00:06:39.695 09:24:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 104085 ']'
00:06:39.695 09:24:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:39.695 09:24:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:39.695 09:24:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
09:24:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:39.695 09:24:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:39.695 [2024-11-19 09:24:26.382739] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:06:39.695 [2024-11-19 09:24:26.382806] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104085 ]
00:06:39.956 [2024-11-19 09:24:26.469666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:39.956 [2024-11-19 09:24:26.502485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.956 [2024-11-19 09:24:26.502485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:39.956 09:24:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:39.956 09:24:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:39.956 09:24:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:40.218 Malloc0
00:06:40.218 09:24:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:40.218 Malloc1
00:06:40.479 09:24:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:40.479 09:24:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
/dev/nbd0
00:06:40.479 09:24:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:40.479 09:24:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
00:06:40.479 1+0 records out
00:06:40.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291734 s, 14.0 MB/s
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:40.479 09:24:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
/dev/nbd1
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
00:06:40.741 1+0 records out
00:06:40.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164036 s, 25.0 MB/s
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:40.741 09:24:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:40.741 09:24:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:41.003 {
00:06:41.003 "nbd_device": "/dev/nbd0",
00:06:41.003 "bdev_name": "Malloc0"
00:06:41.003 },
00:06:41.003 {
00:06:41.003 "nbd_device": "/dev/nbd1",
00:06:41.003 "bdev_name": "Malloc1"
00:06:41.003 }
00:06:41.003 ]'
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:41.003 {
00:06:41.003 "nbd_device": "/dev/nbd0",
00:06:41.003 "bdev_name": "Malloc0"
00:06:41.003 },
00:06:41.003 {
00:06:41.003 "nbd_device": "/dev/nbd1",
00:06:41.003 "bdev_name": "Malloc1"
00:06:41.003 }
00:06:41.003 ]'
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:41.003 /dev/nbd1'
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:41.003 /dev/nbd1'
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
256+0 records in
256+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011731 s, 89.4 MB/s
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
256+0 records in
256+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124284 s, 84.4 MB/s
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:41.003 09:24:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:41.264 256+0 records in
256+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127032 s, 82.5 MB/s
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:41.264 09:24:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:41.526 09:24:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:41.786 09:24:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:41.786 09:24:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:42.048 09:24:28 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:42.048 [2024-11-19 09:24:28.639018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:42.048 [2024-11-19 09:24:28.669745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:42.048 [2024-11-19 09:24:28.669745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.048 [2024-11-19 09:24:28.698846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:42.048 [2024-11-19 09:24:28.698876] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:45.351 09:24:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:45.351 09:24:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
09:24:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 104085 /var/tmp/spdk-nbd.sock
00:06:45.351 09:24:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 104085 ']'
00:06:45.351 09:24:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:45.351 09:24:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:45.351 09:24:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:45.351 09:24:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:45.351 09:24:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:45.351 09:24:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:45.351 09:24:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:45.351 09:24:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
Malloc0
00:06:45.351 09:24:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
Malloc1
00:06:45.351 09:24:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:45.351 09:24:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:45.351 09:24:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:45.351 09:24:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:45.351 09:24:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:45.351 09:24:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:45.352 09:24:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
/dev/nbd0
00:06:45.612 09:24:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:45.612 09:24:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:45.612 09:24:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
00:06:45.613 1+0 records out
00:06:45.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308523 s, 13.3 MB/s
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:45.613 09:24:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:45.613 09:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:45.613 09:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:45.613 09:24:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
/dev/nbd1
00:06:45.874 09:24:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:45.874 09:24:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
00:06:45.874 1+0 records out
00:06:45.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028768 s, 14.2 MB/s
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:45.874 09:24:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:45.874 09:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:45.874 09:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:45.874 09:24:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:45.874 09:24:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:45.874 09:24:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:46.135 {
00:06:46.135 "nbd_device": "/dev/nbd0",
00:06:46.135 "bdev_name": "Malloc0"
00:06:46.135 },
00:06:46.135 {
00:06:46.135 "nbd_device": "/dev/nbd1",
00:06:46.135 "bdev_name": "Malloc1"
00:06:46.135 }
00:06:46.135 ]'
00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:46.135 {
00:06:46.135 "nbd_device": "/dev/nbd0",
00:06:46.135 "bdev_name": "Malloc0"
00:06:46.135 },
00:06:46.135 {
00:06:46.135 "nbd_device": "/dev/nbd1",
00:06:46.135 "bdev_name": "Malloc1"
00:06:46.135 }
00:06:46.135 ]'
00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:46.135 /dev/nbd1'
00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:46.135 /dev/nbd1'
09:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.135 256+0 records in 00:06:46.135 256+0 records out 00:06:46.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127183 s, 82.4 MB/s 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.135 256+0 records in 00:06:46.135 256+0 records out 00:06:46.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012168 s, 86.2 MB/s 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.135 256+0 records in 00:06:46.135 256+0 records out 00:06:46.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131584 s, 79.7 MB/s 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.135 09:24:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.136 09:24:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.396 09:24:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.657 09:24:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.658 09:24:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.658 09:24:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.658 09:24:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.658 09:24:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.658 09:24:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.658 09:24:33 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:46.658 09:24:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.658 09:24:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.658 09:24:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.658 09:24:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.918 09:24:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.918 09:24:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.919 09:24:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.179 [2024-11-19 09:24:33.725592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.179 [2024-11-19 09:24:33.756592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.179 [2024-11-19 09:24:33.756592] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.179 [2024-11-19 09:24:33.786045] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.179 [2024-11-19 09:24:33.786075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.480 09:24:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.480 09:24:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.480 spdk_app_start Round 2 00:06:50.480 09:24:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 104085 /var/tmp/spdk-nbd.sock 00:06:50.480 09:24:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 104085 ']' 00:06:50.480 09:24:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.480 09:24:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.480 09:24:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:50.480 09:24:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.480 09:24:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.480 09:24:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.480 09:24:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:50.480 09:24:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.480 Malloc0 00:06:50.480 09:24:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.480 Malloc1 00:06:50.480 09:24:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.480 09:24:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.742 /dev/nbd0 00:06:50.742 09:24:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.742 09:24:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.742 1+0 records in 00:06:50.742 1+0 records out 00:06:50.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285165 s, 14.4 MB/s 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.742 09:24:37 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.742 09:24:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:50.742 09:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.742 09:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.742 09:24:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.004 /dev/nbd1 00:06:51.004 09:24:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.004 09:24:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.004 1+0 records in 00:06:51.004 1+0 records out 00:06:51.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162272 s, 25.2 MB/s 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.004 09:24:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:51.004 09:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.004 09:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.004 09:24:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.004 09:24:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.004 09:24:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.266 { 00:06:51.266 "nbd_device": "/dev/nbd0", 00:06:51.266 "bdev_name": "Malloc0" 00:06:51.266 }, 00:06:51.266 { 00:06:51.266 "nbd_device": "/dev/nbd1", 00:06:51.266 "bdev_name": "Malloc1" 00:06:51.266 } 00:06:51.266 ]' 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.266 { 00:06:51.266 "nbd_device": "/dev/nbd0", 00:06:51.266 "bdev_name": "Malloc0" 00:06:51.266 }, 00:06:51.266 { 00:06:51.266 "nbd_device": "/dev/nbd1", 00:06:51.266 "bdev_name": "Malloc1" 00:06:51.266 } 00:06:51.266 ]' 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.266 /dev/nbd1' 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.266 /dev/nbd1' 00:06:51.266 
09:24:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.266 256+0 records in 00:06:51.266 256+0 records out 00:06:51.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127248 s, 82.4 MB/s 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.266 256+0 records in 00:06:51.266 256+0 records out 00:06:51.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121846 s, 86.1 MB/s 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.266 256+0 records in 00:06:51.266 256+0 records out 00:06:51.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130556 s, 80.3 MB/s 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.266 09:24:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.267 09:24:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.528 09:24:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.789 09:24:38 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.789 09:24:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.050 09:24:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.050 09:24:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.050 09:24:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.311 [2024-11-19 09:24:38.835775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.311 [2024-11-19 09:24:38.865367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.311 [2024-11-19 09:24:38.865367] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.311 [2024-11-19 09:24:38.894481] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.311 [2024-11-19 09:24:38.894512] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.623 09:24:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 104085 /var/tmp/spdk-nbd.sock 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 104085 ']' 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:55.623 09:24:41 event.app_repeat -- event/event.sh@39 -- # killprocess 104085 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 104085 ']' 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 104085 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104085 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104085' 00:06:55.623 killing process with pid 104085 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@973 -- # kill 104085 00:06:55.623 09:24:41 event.app_repeat -- common/autotest_common.sh@978 -- # wait 104085 00:06:55.623 spdk_app_start is called in Round 0. 00:06:55.623 Shutdown signal received, stop current app iteration 00:06:55.623 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:55.623 spdk_app_start is called in Round 1. 00:06:55.623 Shutdown signal received, stop current app iteration 00:06:55.623 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:55.623 spdk_app_start is called in Round 2. 
00:06:55.623 Shutdown signal received, stop current app iteration 00:06:55.623 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:55.623 spdk_app_start is called in Round 3. 00:06:55.623 Shutdown signal received, stop current app iteration 00:06:55.623 09:24:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:55.623 09:24:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:55.623 00:06:55.623 real 0m15.718s 00:06:55.623 user 0m34.459s 00:06:55.623 sys 0m2.293s 00:06:55.623 09:24:42 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.623 09:24:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.623 ************************************ 00:06:55.623 END TEST app_repeat 00:06:55.623 ************************************ 00:06:55.623 09:24:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:55.623 09:24:42 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:55.623 09:24:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.623 09:24:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.623 09:24:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.623 ************************************ 00:06:55.623 START TEST cpu_locks 00:06:55.623 ************************************ 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:55.623 * Looking for test storage... 
00:06:55.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.623 09:24:42 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.623 --rc genhtml_branch_coverage=1 00:06:55.623 --rc genhtml_function_coverage=1 00:06:55.623 --rc genhtml_legend=1 00:06:55.623 --rc geninfo_all_blocks=1 00:06:55.623 --rc geninfo_unexecuted_blocks=1 00:06:55.623 00:06:55.623 ' 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.623 --rc genhtml_branch_coverage=1 00:06:55.623 --rc genhtml_function_coverage=1 00:06:55.623 --rc genhtml_legend=1 00:06:55.623 --rc geninfo_all_blocks=1 00:06:55.623 --rc geninfo_unexecuted_blocks=1 
00:06:55.623 00:06:55.623 ' 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.623 --rc genhtml_branch_coverage=1 00:06:55.623 --rc genhtml_function_coverage=1 00:06:55.623 --rc genhtml_legend=1 00:06:55.623 --rc geninfo_all_blocks=1 00:06:55.623 --rc geninfo_unexecuted_blocks=1 00:06:55.623 00:06:55.623 ' 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.623 --rc genhtml_branch_coverage=1 00:06:55.623 --rc genhtml_function_coverage=1 00:06:55.623 --rc genhtml_legend=1 00:06:55.623 --rc geninfo_all_blocks=1 00:06:55.623 --rc geninfo_unexecuted_blocks=1 00:06:55.623 00:06:55.623 ' 00:06:55.623 09:24:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:55.623 09:24:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:55.623 09:24:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:55.623 09:24:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.623 09:24:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.888 ************************************ 00:06:55.888 START TEST default_locks 00:06:55.888 ************************************ 00:06:55.888 09:24:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:55.888 09:24:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107672 00:06:55.888 09:24:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 107672 00:06:55.888 09:24:42 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.889 09:24:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107672 ']' 00:06:55.889 09:24:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.889 09:24:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.889 09:24:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.889 09:24:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.889 09:24:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.889 [2024-11-19 09:24:42.446581] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:55.889 [2024-11-19 09:24:42.446640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107672 ] 00:06:55.889 [2024-11-19 09:24:42.532939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.889 [2024-11-19 09:24:42.567487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.847 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.847 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:56.847 09:24:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 107672 00:06:56.847 09:24:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 107672 00:06:56.847 09:24:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.108 lslocks: write error 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 107672 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 107672 ']' 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 107672 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107672 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 107672' 00:06:57.108 killing process with pid 107672 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 107672 00:06:57.108 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 107672 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107672 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 107672 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 107672 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107672 ']' 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (107672) - No such process 00:06:57.370 ERROR: process (pid: 107672) is no longer running 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:57.370 00:06:57.370 real 0m1.574s 00:06:57.370 user 0m1.692s 00:06:57.370 sys 0m0.528s 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.370 09:24:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.370 ************************************ 00:06:57.370 END TEST default_locks 00:06:57.370 ************************************ 00:06:57.370 09:24:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:57.370 09:24:43 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.370 09:24:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.370 09:24:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.370 ************************************ 00:06:57.370 START TEST default_locks_via_rpc 00:06:57.370 ************************************ 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=108035 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 108035 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 108035 ']' 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.370 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.370 [2024-11-19 09:24:44.106292] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:57.370 [2024-11-19 09:24:44.106354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108035 ] 00:06:57.632 [2024-11-19 09:24:44.192746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.632 [2024-11-19 09:24:44.227196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.205 09:24:44 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 108035 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 108035 00:06:58.205 09:24:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.778 09:24:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 108035 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 108035 ']' 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 108035 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108035 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108035' 00:06:58.779 killing process with pid 108035 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 108035 00:06:58.779 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 108035 00:06:59.040 00:06:59.040 real 0m1.557s 00:06:59.040 user 0m1.661s 00:06:59.040 sys 0m0.548s 00:06:59.040 09:24:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.040 09:24:45 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.040 ************************************ 00:06:59.040 END TEST default_locks_via_rpc 00:06:59.040 ************************************ 00:06:59.040 09:24:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:59.041 09:24:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.041 09:24:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.041 09:24:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.041 ************************************ 00:06:59.041 START TEST non_locking_app_on_locked_coremask 00:06:59.041 ************************************ 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108377 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 108377 /var/tmp/spdk.sock 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108377 ']' 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:59.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.041 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.041 [2024-11-19 09:24:45.726587] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:59.041 [2024-11-19 09:24:45.726630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108377 ] 00:06:59.041 [2024-11-19 09:24:45.776732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.302 [2024-11-19 09:24:45.807258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108410 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 108410 /var/tmp/spdk2.sock 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108410 ']' 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.302 09:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.564 [2024-11-19 09:24:46.053732] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:59.564 [2024-11-19 09:24:46.053781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108410 ] 00:06:59.564 [2024-11-19 09:24:46.140011] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.564 [2024-11-19 09:24:46.140037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.564 [2024-11-19 09:24:46.206185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.135 09:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.135 09:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:00.135 09:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 108377 00:07:00.135 09:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108377 00:07:00.135 09:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.708 lslocks: write error 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 108377 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108377 ']' 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108377 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108377 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 108377' 00:07:00.708 killing process with pid 108377 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108377 00:07:00.708 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108377 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 108410 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108410 ']' 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108410 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108410 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108410' 00:07:01.282 killing process with pid 108410 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108410 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108410 00:07:01.282 00:07:01.282 real 0m2.322s 00:07:01.282 user 0m2.567s 00:07:01.282 sys 0m0.832s 00:07:01.282 09:24:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.282 09:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.282 ************************************ 00:07:01.282 END TEST non_locking_app_on_locked_coremask 00:07:01.282 ************************************ 00:07:01.544 09:24:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:01.544 09:24:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.544 09:24:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.544 09:24:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.544 ************************************ 00:07:01.544 START TEST locking_app_on_unlocked_coremask 00:07:01.544 ************************************ 00:07:01.544 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:01.544 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=108781 00:07:01.544 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 108781 /var/tmp/spdk.sock 00:07:01.544 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:01.544 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108781 ']' 00:07:01.544 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.544 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.544 09:24:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.544 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.544 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.544 [2024-11-19 09:24:48.124288] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:01.544 [2024-11-19 09:24:48.124336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108781 ] 00:07:01.544 [2024-11-19 09:24:48.209459] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:01.544 [2024-11-19 09:24:48.209484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.544 [2024-11-19 09:24:48.240279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=109013 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 109013 /var/tmp/spdk2.sock 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109013 ']' 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.490 09:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.490 [2024-11-19 09:24:48.982633] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:02.490 [2024-11-19 09:24:48.982687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109013 ] 00:07:02.490 [2024-11-19 09:24:49.070025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.490 [2024-11-19 09:24:49.132508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.063 09:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.063 09:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:03.063 09:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 109013 00:07:03.063 09:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109013 00:07:03.063 09:24:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.008 lslocks: write error 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 108781 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108781 ']' 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108781 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108781 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108781' 00:07:04.008 killing process with pid 108781 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108781 00:07:04.008 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108781 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 109013 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 109013 ']' 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 109013 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109013 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109013' 00:07:04.269 killing process with pid 109013 00:07:04.269 09:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 109013 00:07:04.269 09:24:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 109013 00:07:04.531 00:07:04.531 real 0m3.004s 00:07:04.531 user 0m3.354s 00:07:04.531 sys 0m0.930s 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.531 ************************************ 00:07:04.531 END TEST locking_app_on_unlocked_coremask 00:07:04.531 ************************************ 00:07:04.531 09:24:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:04.531 09:24:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.531 09:24:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.531 09:24:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.531 ************************************ 00:07:04.531 START TEST locking_app_on_locked_coremask 00:07:04.531 ************************************ 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=109492 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 109492 /var/tmp/spdk.sock 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109492 ']' 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.531 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.531 [2024-11-19 09:24:51.207046] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:04.531 [2024-11-19 09:24:51.207102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109492 ] 00:07:04.792 [2024-11-19 09:24:51.291777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.792 [2024-11-19 09:24:51.326459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=109572 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 109572 /var/tmp/spdk2.sock 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 109572 /var/tmp/spdk2.sock 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109572 /var/tmp/spdk2.sock 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109572 ']' 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.365 09:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.365 [2024-11-19 09:24:52.047442] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:05.366 [2024-11-19 09:24:52.047499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109572 ] 00:07:05.627 [2024-11-19 09:24:52.135232] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 109492 has claimed it. 00:07:05.627 [2024-11-19 09:24:52.135263] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109572) - No such process 00:07:06.200 ERROR: process (pid: 109572) is no longer running 00:07:06.200 09:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.200 09:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:06.200 09:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:06.200 09:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.201 09:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.201 09:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.201 09:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 109492 00:07:06.201 09:24:52 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109492 00:07:06.201 09:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.461 lslocks: write error 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 109492 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 109492 ']' 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 109492 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109492 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109492' 00:07:06.462 killing process with pid 109492 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 109492 00:07:06.462 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 109492 00:07:06.724 00:07:06.724 real 0m2.228s 00:07:06.724 user 0m2.506s 00:07:06.724 sys 0m0.629s 00:07:06.724 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.724 09:24:53 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.724 ************************************ 00:07:06.724 END TEST locking_app_on_locked_coremask 00:07:06.724 ************************************ 00:07:06.724 09:24:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:06.724 09:24:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.724 09:24:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.724 09:24:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.724 ************************************ 00:07:06.724 START TEST locking_overlapped_coremask 00:07:06.724 ************************************ 00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109870 00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 109870 /var/tmp/spdk.sock 00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109870 ']' 00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.724 09:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.985 [2024-11-19 09:24:53.514442] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:06.985 [2024-11-19 09:24:53.514499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109870 ] 00:07:06.985 [2024-11-19 09:24:53.602058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.985 [2024-11-19 09:24:53.639775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.985 [2024-11-19 09:24:53.639806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.985 [2024-11-19 09:24:53.639808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=110200 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 110200 /var/tmp/spdk2.sock 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 110200 /var/tmp/spdk2.sock 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 110200 /var/tmp/spdk2.sock 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 110200 ']' 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.929 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.929 [2024-11-19 09:24:54.374208] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:07.929 [2024-11-19 09:24:54.374263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110200 ] 00:07:07.929 [2024-11-19 09:24:54.487407] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109870 has claimed it. 00:07:07.929 [2024-11-19 09:24:54.487448] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (110200) - No such process 00:07:08.501 ERROR: process (pid: 110200) is no longer running 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 109870 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 109870 ']' 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 109870 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.501 09:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109870 00:07:08.501 09:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.501 09:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.501 09:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109870' 00:07:08.501 killing process with pid 109870 00:07:08.501 09:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 109870 00:07:08.501 09:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 109870 00:07:08.501 00:07:08.501 real 0m1.786s 00:07:08.501 user 0m5.157s 00:07:08.501 sys 0m0.398s 00:07:08.501 09:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.501 09:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.501 ************************************ 
00:07:08.501 END TEST locking_overlapped_coremask 00:07:08.501 ************************************ 00:07:08.764 09:24:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:08.764 09:24:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.764 09:24:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.764 09:24:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.764 ************************************ 00:07:08.764 START TEST locking_overlapped_coremask_via_rpc 00:07:08.764 ************************************ 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=110274 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 110274 /var/tmp/spdk.sock 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 110274 ']' 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.764 09:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.764 [2024-11-19 09:24:55.371438] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:08.764 [2024-11-19 09:24:55.371491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110274 ] 00:07:08.764 [2024-11-19 09:24:55.456312] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:08.764 [2024-11-19 09:24:55.456334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.764 [2024-11-19 09:24:55.489746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.764 [2024-11-19 09:24:55.489896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.764 [2024-11-19 09:24:55.489898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=110581 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 110581 /var/tmp/spdk2.sock 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 110581 ']' 00:07:09.704 09:24:56 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.704 09:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.704 [2024-11-19 09:24:56.227130] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:09.704 [2024-11-19 09:24:56.227191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110581 ] 00:07:09.704 [2024-11-19 09:24:56.338375] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:09.704 [2024-11-19 09:24:56.338412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.704 [2024-11-19 09:24:56.411971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.704 [2024-11-19 09:24:56.415283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.704 [2024-11-19 09:24:56.415284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:10.276 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.276 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:10.276 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.276 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.276 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.537 09:24:57 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.537 [2024-11-19 09:24:57.031236] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 110274 has claimed it. 00:07:10.537 request: 00:07:10.537 { 00:07:10.537 "method": "framework_enable_cpumask_locks", 00:07:10.537 "req_id": 1 00:07:10.537 } 00:07:10.537 Got JSON-RPC error response 00:07:10.537 response: 00:07:10.537 { 00:07:10.537 "code": -32603, 00:07:10.537 "message": "Failed to claim CPU core: 2" 00:07:10.537 } 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 110274 /var/tmp/spdk.sock 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 110274 ']' 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 110581 /var/tmp/spdk2.sock 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 110581 ']' 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.537 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.798 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.798 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:10.798 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:10.798 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.798 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.798 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.798 00:07:10.798 real 0m2.092s 00:07:10.798 user 0m0.839s 00:07:10.798 sys 0m0.177s 00:07:10.798 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.798 09:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.798 ************************************ 00:07:10.798 END TEST locking_overlapped_coremask_via_rpc 00:07:10.798 ************************************ 00:07:10.798 09:24:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:10.798 09:24:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 110274 ]] 00:07:10.798 09:24:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 110274 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 110274 ']' 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 110274 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110274 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110274' 00:07:10.798 killing process with pid 110274 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 110274 00:07:10.798 09:24:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 110274 00:07:11.059 09:24:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 110581 ]] 00:07:11.059 09:24:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 110581 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 110581 ']' 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 110581 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110581 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110581' 00:07:11.059 
killing process with pid 110581 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 110581 00:07:11.059 09:24:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 110581 00:07:11.320 09:24:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.320 09:24:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:11.320 09:24:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 110274 ]] 00:07:11.320 09:24:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 110274 00:07:11.320 09:24:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 110274 ']' 00:07:11.320 09:24:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 110274 00:07:11.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (110274) - No such process 00:07:11.320 09:24:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 110274 is not found' 00:07:11.320 Process with pid 110274 is not found 00:07:11.320 09:24:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 110581 ]] 00:07:11.320 09:24:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 110581 00:07:11.320 09:24:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 110581 ']' 00:07:11.320 09:24:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 110581 00:07:11.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (110581) - No such process 00:07:11.320 09:24:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 110581 is not found' 00:07:11.320 Process with pid 110581 is not found 00:07:11.320 09:24:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.320 00:07:11.320 real 0m15.817s 00:07:11.320 user 0m27.834s 00:07:11.320 sys 0m4.960s 00:07:11.320 09:24:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.320 09:24:57 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.320 ************************************ 00:07:11.320 END TEST cpu_locks 00:07:11.320 ************************************ 00:07:11.320 00:07:11.320 real 0m41.613s 00:07:11.320 user 1m21.743s 00:07:11.320 sys 0m8.369s 00:07:11.320 09:24:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.320 09:24:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.320 ************************************ 00:07:11.320 END TEST event 00:07:11.320 ************************************ 00:07:11.320 09:24:58 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:11.320 09:24:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.320 09:24:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.320 09:24:58 -- common/autotest_common.sh@10 -- # set +x 00:07:11.582 ************************************ 00:07:11.582 START TEST thread 00:07:11.582 ************************************ 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:11.582 * Looking for test storage... 
00:07:11.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.582 09:24:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.582 09:24:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.582 09:24:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.582 09:24:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.582 09:24:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.582 09:24:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.582 09:24:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.582 09:24:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.582 09:24:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.582 09:24:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.582 09:24:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.582 09:24:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:11.582 09:24:58 thread -- scripts/common.sh@345 -- # : 1 00:07:11.582 09:24:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.582 09:24:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.582 09:24:58 thread -- scripts/common.sh@365 -- # decimal 1 00:07:11.582 09:24:58 thread -- scripts/common.sh@353 -- # local d=1 00:07:11.582 09:24:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.582 09:24:58 thread -- scripts/common.sh@355 -- # echo 1 00:07:11.582 09:24:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.582 09:24:58 thread -- scripts/common.sh@366 -- # decimal 2 00:07:11.582 09:24:58 thread -- scripts/common.sh@353 -- # local d=2 00:07:11.582 09:24:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.582 09:24:58 thread -- scripts/common.sh@355 -- # echo 2 00:07:11.582 09:24:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.582 09:24:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.582 09:24:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.582 09:24:58 thread -- scripts/common.sh@368 -- # return 0 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.582 --rc genhtml_branch_coverage=1 00:07:11.582 --rc genhtml_function_coverage=1 00:07:11.582 --rc genhtml_legend=1 00:07:11.582 --rc geninfo_all_blocks=1 00:07:11.582 --rc geninfo_unexecuted_blocks=1 00:07:11.582 00:07:11.582 ' 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.582 --rc genhtml_branch_coverage=1 00:07:11.582 --rc genhtml_function_coverage=1 00:07:11.582 --rc genhtml_legend=1 00:07:11.582 --rc geninfo_all_blocks=1 00:07:11.582 --rc geninfo_unexecuted_blocks=1 00:07:11.582 00:07:11.582 ' 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.582 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.582 --rc genhtml_branch_coverage=1 00:07:11.582 --rc genhtml_function_coverage=1 00:07:11.582 --rc genhtml_legend=1 00:07:11.582 --rc geninfo_all_blocks=1 00:07:11.582 --rc geninfo_unexecuted_blocks=1 00:07:11.582 00:07:11.582 ' 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.582 --rc genhtml_branch_coverage=1 00:07:11.582 --rc genhtml_function_coverage=1 00:07:11.582 --rc genhtml_legend=1 00:07:11.582 --rc geninfo_all_blocks=1 00:07:11.582 --rc geninfo_unexecuted_blocks=1 00:07:11.582 00:07:11.582 ' 00:07:11.582 09:24:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.582 09:24:58 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.582 ************************************ 00:07:11.582 START TEST thread_poller_perf 00:07:11.582 ************************************ 00:07:11.582 09:24:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.843 [2024-11-19 09:24:58.333249] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:11.843 [2024-11-19 09:24:58.333340] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111026 ] 00:07:11.843 [2024-11-19 09:24:58.420258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.843 [2024-11-19 09:24:58.453075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.843 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:12.785 [2024-11-19T08:24:59.533Z] ====================================== 00:07:12.785 [2024-11-19T08:24:59.533Z] busy:2409466868 (cyc) 00:07:12.785 [2024-11-19T08:24:59.533Z] total_run_count: 418000 00:07:12.785 [2024-11-19T08:24:59.533Z] tsc_hz: 2400000000 (cyc) 00:07:12.785 [2024-11-19T08:24:59.533Z] ====================================== 00:07:12.785 [2024-11-19T08:24:59.533Z] poller_cost: 5764 (cyc), 2401 (nsec) 00:07:12.785 00:07:12.785 real 0m1.174s 00:07:12.785 user 0m1.099s 00:07:12.785 sys 0m0.071s 00:07:12.785 09:24:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.785 09:24:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.785 ************************************ 00:07:12.785 END TEST thread_poller_perf 00:07:12.785 ************************************ 00:07:12.785 09:24:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.785 09:24:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:12.785 09:24:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.785 09:24:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.046 ************************************ 00:07:13.046 START TEST thread_poller_perf 00:07:13.046 
************************************ 00:07:13.046 09:24:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.046 [2024-11-19 09:24:59.584725] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:13.046 [2024-11-19 09:24:59.584826] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111375 ] 00:07:13.046 [2024-11-19 09:24:59.671665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.046 [2024-11-19 09:24:59.704344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.046 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:13.986 [2024-11-19T08:25:00.734Z] ====================================== 00:07:13.986 [2024-11-19T08:25:00.734Z] busy:2401396686 (cyc) 00:07:13.986 [2024-11-19T08:25:00.734Z] total_run_count: 5559000 00:07:13.986 [2024-11-19T08:25:00.734Z] tsc_hz: 2400000000 (cyc) 00:07:13.986 [2024-11-19T08:25:00.734Z] ====================================== 00:07:13.986 [2024-11-19T08:25:00.734Z] poller_cost: 431 (cyc), 179 (nsec) 00:07:13.986 00:07:13.986 real 0m1.167s 00:07:13.986 user 0m1.085s 00:07:13.986 sys 0m0.078s 00:07:13.986 09:25:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.986 09:25:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.986 ************************************ 00:07:13.986 END TEST thread_poller_perf 00:07:13.986 ************************************ 00:07:14.248 09:25:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:14.248 00:07:14.248 real 0m2.694s 00:07:14.248 user 0m2.357s 00:07:14.248 sys 0m0.350s 00:07:14.248 09:25:00 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.248 09:25:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.248 ************************************ 00:07:14.248 END TEST thread 00:07:14.248 ************************************ 00:07:14.248 09:25:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:14.248 09:25:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:14.248 09:25:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.248 09:25:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.248 09:25:00 -- common/autotest_common.sh@10 -- # set +x 00:07:14.248 ************************************ 00:07:14.248 START TEST app_cmdline 00:07:14.248 ************************************ 00:07:14.248 09:25:00 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:14.248 * Looking for test storage... 00:07:14.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:14.248 09:25:00 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.248 09:25:00 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.248 09:25:00 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.510 09:25:01 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.510 --rc genhtml_branch_coverage=1 
00:07:14.510 --rc genhtml_function_coverage=1 00:07:14.510 --rc genhtml_legend=1 00:07:14.510 --rc geninfo_all_blocks=1 00:07:14.510 --rc geninfo_unexecuted_blocks=1 00:07:14.510 00:07:14.510 ' 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:14.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.510 --rc genhtml_branch_coverage=1 00:07:14.510 --rc genhtml_function_coverage=1 00:07:14.510 --rc genhtml_legend=1 00:07:14.510 --rc geninfo_all_blocks=1 00:07:14.510 --rc geninfo_unexecuted_blocks=1 00:07:14.510 00:07:14.510 ' 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.510 --rc genhtml_branch_coverage=1 00:07:14.510 --rc genhtml_function_coverage=1 00:07:14.510 --rc genhtml_legend=1 00:07:14.510 --rc geninfo_all_blocks=1 00:07:14.510 --rc geninfo_unexecuted_blocks=1 00:07:14.510 00:07:14.510 ' 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.510 --rc genhtml_branch_coverage=1 00:07:14.510 --rc genhtml_function_coverage=1 00:07:14.510 --rc genhtml_legend=1 00:07:14.510 --rc geninfo_all_blocks=1 00:07:14.510 --rc geninfo_unexecuted_blocks=1 00:07:14.510 00:07:14.510 ' 00:07:14.510 09:25:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:14.510 09:25:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=111782 00:07:14.510 09:25:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 111782 00:07:14.510 09:25:01 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 111782 ']' 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.510 09:25:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.510 [2024-11-19 09:25:01.108197] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:14.511 [2024-11-19 09:25:01.108267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111782 ] 00:07:14.511 [2024-11-19 09:25:01.196754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.511 [2024-11-19 09:25:01.236448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.451 09:25:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.451 09:25:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:15.451 09:25:01 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:15.451 { 00:07:15.451 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:07:15.451 "fields": { 00:07:15.451 "major": 25, 00:07:15.451 "minor": 1, 00:07:15.451 "patch": 0, 00:07:15.451 "suffix": "-pre", 00:07:15.451 "commit": "d47eb51c9" 00:07:15.451 } 00:07:15.451 } 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:15.451 09:25:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:15.451 09:25:02 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:15.713 request: 00:07:15.713 { 00:07:15.713 "method": "env_dpdk_get_mem_stats", 00:07:15.713 "req_id": 1 00:07:15.713 } 00:07:15.713 Got JSON-RPC error response 00:07:15.713 response: 00:07:15.713 { 00:07:15.713 "code": -32601, 00:07:15.713 "message": "Method not found" 00:07:15.713 } 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:15.713 09:25:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 111782 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 111782 ']' 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 111782 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111782 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111782' 00:07:15.713 killing process with pid 111782 00:07:15.713 09:25:02 
app_cmdline -- common/autotest_common.sh@973 -- # kill 111782 00:07:15.713 09:25:02 app_cmdline -- common/autotest_common.sh@978 -- # wait 111782 00:07:15.974 00:07:15.974 real 0m1.726s 00:07:15.974 user 0m2.082s 00:07:15.974 sys 0m0.461s 00:07:15.974 09:25:02 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.974 09:25:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.974 ************************************ 00:07:15.974 END TEST app_cmdline 00:07:15.974 ************************************ 00:07:15.974 09:25:02 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:15.974 09:25:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.974 09:25:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.974 09:25:02 -- common/autotest_common.sh@10 -- # set +x 00:07:15.974 ************************************ 00:07:15.974 START TEST version 00:07:15.974 ************************************ 00:07:15.974 09:25:02 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:16.235 * Looking for test storage... 
00:07:16.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:16.235 09:25:02 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.235 09:25:02 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.235 09:25:02 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.235 09:25:02 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.235 09:25:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.235 09:25:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.235 09:25:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.235 09:25:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.235 09:25:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.235 09:25:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.235 09:25:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.235 09:25:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.235 09:25:02 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.235 09:25:02 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.235 09:25:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.235 09:25:02 version -- scripts/common.sh@344 -- # case "$op" in 00:07:16.235 09:25:02 version -- scripts/common.sh@345 -- # : 1 00:07:16.236 09:25:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.236 09:25:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.236 09:25:02 version -- scripts/common.sh@365 -- # decimal 1 00:07:16.236 09:25:02 version -- scripts/common.sh@353 -- # local d=1 00:07:16.236 09:25:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.236 09:25:02 version -- scripts/common.sh@355 -- # echo 1 00:07:16.236 09:25:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.236 09:25:02 version -- scripts/common.sh@366 -- # decimal 2 00:07:16.236 09:25:02 version -- scripts/common.sh@353 -- # local d=2 00:07:16.236 09:25:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.236 09:25:02 version -- scripts/common.sh@355 -- # echo 2 00:07:16.236 09:25:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.236 09:25:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.236 09:25:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.236 09:25:02 version -- scripts/common.sh@368 -- # return 0 00:07:16.236 09:25:02 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.236 09:25:02 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.236 --rc genhtml_branch_coverage=1 00:07:16.236 --rc genhtml_function_coverage=1 00:07:16.236 --rc genhtml_legend=1 00:07:16.236 --rc geninfo_all_blocks=1 00:07:16.236 --rc geninfo_unexecuted_blocks=1 00:07:16.236 00:07:16.236 ' 00:07:16.236 09:25:02 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.236 --rc genhtml_branch_coverage=1 00:07:16.236 --rc genhtml_function_coverage=1 00:07:16.236 --rc genhtml_legend=1 00:07:16.236 --rc geninfo_all_blocks=1 00:07:16.236 --rc geninfo_unexecuted_blocks=1 00:07:16.236 00:07:16.236 ' 00:07:16.236 09:25:02 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.236 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.236 --rc genhtml_branch_coverage=1 00:07:16.236 --rc genhtml_function_coverage=1 00:07:16.236 --rc genhtml_legend=1 00:07:16.236 --rc geninfo_all_blocks=1 00:07:16.236 --rc geninfo_unexecuted_blocks=1 00:07:16.236 00:07:16.236 ' 00:07:16.236 09:25:02 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.236 --rc genhtml_branch_coverage=1 00:07:16.236 --rc genhtml_function_coverage=1 00:07:16.236 --rc genhtml_legend=1 00:07:16.236 --rc geninfo_all_blocks=1 00:07:16.236 --rc geninfo_unexecuted_blocks=1 00:07:16.236 00:07:16.236 ' 00:07:16.236 09:25:02 version -- app/version.sh@17 -- # get_header_version major 00:07:16.236 09:25:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:16.236 09:25:02 version -- app/version.sh@14 -- # cut -f2 00:07:16.236 09:25:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.236 09:25:02 version -- app/version.sh@17 -- # major=25 00:07:16.236 09:25:02 version -- app/version.sh@18 -- # get_header_version minor 00:07:16.236 09:25:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:16.236 09:25:02 version -- app/version.sh@14 -- # cut -f2 00:07:16.236 09:25:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.236 09:25:02 version -- app/version.sh@18 -- # minor=1 00:07:16.236 09:25:02 version -- app/version.sh@19 -- # get_header_version patch 00:07:16.236 09:25:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:16.236 09:25:02 version -- app/version.sh@14 -- # cut -f2 00:07:16.236 09:25:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.236 
09:25:02 version -- app/version.sh@19 -- # patch=0 00:07:16.236 09:25:02 version -- app/version.sh@20 -- # get_header_version suffix 00:07:16.236 09:25:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:16.236 09:25:02 version -- app/version.sh@14 -- # cut -f2 00:07:16.236 09:25:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.236 09:25:02 version -- app/version.sh@20 -- # suffix=-pre 00:07:16.236 09:25:02 version -- app/version.sh@22 -- # version=25.1 00:07:16.236 09:25:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:16.236 09:25:02 version -- app/version.sh@28 -- # version=25.1rc0 00:07:16.236 09:25:02 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:16.236 09:25:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:16.236 09:25:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:16.236 09:25:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:16.236 00:07:16.236 real 0m0.276s 00:07:16.236 user 0m0.163s 00:07:16.236 sys 0m0.160s 00:07:16.236 09:25:02 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.236 09:25:02 version -- common/autotest_common.sh@10 -- # set +x 00:07:16.236 ************************************ 00:07:16.236 END TEST version 00:07:16.236 ************************************ 00:07:16.236 09:25:02 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:16.236 09:25:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:16.236 09:25:02 -- spdk/autotest.sh@194 -- # uname -s 00:07:16.236 09:25:02 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:16.236 09:25:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:16.236 09:25:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:16.236 09:25:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:16.236 09:25:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:16.236 09:25:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:16.236 09:25:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.236 09:25:02 -- common/autotest_common.sh@10 -- # set +x 00:07:16.497 09:25:03 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:16.497 09:25:03 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:16.497 09:25:03 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:16.497 09:25:03 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:16.497 09:25:03 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:16.497 09:25:03 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:16.497 09:25:03 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:16.497 09:25:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.497 09:25:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.497 09:25:03 -- common/autotest_common.sh@10 -- # set +x 00:07:16.497 ************************************ 00:07:16.497 START TEST nvmf_tcp 00:07:16.497 ************************************ 00:07:16.497 09:25:03 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:16.497 * Looking for test storage... 
00:07:16.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:16.497 09:25:03 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.497 09:25:03 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.497 09:25:03 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.497 09:25:03 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.497 09:25:03 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:16.759 09:25:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:16.759 09:25:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.759 09:25:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:16.759 09:25:03 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.759 09:25:03 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.759 09:25:03 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.759 09:25:03 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:16.759 09:25:03 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.759 09:25:03 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.759 --rc genhtml_branch_coverage=1 00:07:16.759 --rc genhtml_function_coverage=1 00:07:16.759 --rc genhtml_legend=1 00:07:16.759 --rc geninfo_all_blocks=1 00:07:16.759 --rc geninfo_unexecuted_blocks=1 00:07:16.759 00:07:16.759 ' 00:07:16.760 09:25:03 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.760 --rc genhtml_branch_coverage=1 00:07:16.760 --rc genhtml_function_coverage=1 00:07:16.760 --rc genhtml_legend=1 00:07:16.760 --rc geninfo_all_blocks=1 00:07:16.760 --rc geninfo_unexecuted_blocks=1 00:07:16.760 00:07:16.760 ' 00:07:16.760 09:25:03 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.760 --rc genhtml_branch_coverage=1 00:07:16.760 --rc genhtml_function_coverage=1 00:07:16.760 --rc genhtml_legend=1 00:07:16.760 --rc geninfo_all_blocks=1 00:07:16.760 --rc geninfo_unexecuted_blocks=1 00:07:16.760 00:07:16.760 ' 00:07:16.760 09:25:03 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.760 --rc genhtml_branch_coverage=1 00:07:16.760 --rc genhtml_function_coverage=1 00:07:16.760 --rc genhtml_legend=1 00:07:16.760 --rc geninfo_all_blocks=1 00:07:16.760 --rc geninfo_unexecuted_blocks=1 00:07:16.760 00:07:16.760 ' 00:07:16.760 09:25:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:16.760 09:25:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:16.760 09:25:03 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:16.760 09:25:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.760 09:25:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.760 09:25:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:16.760 ************************************ 00:07:16.760 START TEST nvmf_target_core 00:07:16.760 ************************************ 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:16.760 * Looking for test storage... 
00:07:16.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.760 --rc genhtml_branch_coverage=1 00:07:16.760 --rc genhtml_function_coverage=1 00:07:16.760 --rc genhtml_legend=1 00:07:16.760 --rc geninfo_all_blocks=1 00:07:16.760 --rc geninfo_unexecuted_blocks=1 00:07:16.760 00:07:16.760 ' 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.760 --rc genhtml_branch_coverage=1 
00:07:16.760 --rc genhtml_function_coverage=1 00:07:16.760 --rc genhtml_legend=1 00:07:16.760 --rc geninfo_all_blocks=1 00:07:16.760 --rc geninfo_unexecuted_blocks=1 00:07:16.760 00:07:16.760 ' 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.760 --rc genhtml_branch_coverage=1 00:07:16.760 --rc genhtml_function_coverage=1 00:07:16.760 --rc genhtml_legend=1 00:07:16.760 --rc geninfo_all_blocks=1 00:07:16.760 --rc geninfo_unexecuted_blocks=1 00:07:16.760 00:07:16.760 ' 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.760 --rc genhtml_branch_coverage=1 00:07:16.760 --rc genhtml_function_coverage=1 00:07:16.760 --rc genhtml_legend=1 00:07:16.760 --rc geninfo_all_blocks=1 00:07:16.760 --rc geninfo_unexecuted_blocks=1 00:07:16.760 00:07:16.760 ' 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:16.760 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.022 09:25:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:17.023 ************************************ 00:07:17.023 START TEST nvmf_abort 00:07:17.023 ************************************ 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:17.023 * Looking for test storage... 
00:07:17.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.023 
09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.023 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.287 --rc genhtml_branch_coverage=1 00:07:17.287 --rc genhtml_function_coverage=1 00:07:17.287 --rc genhtml_legend=1 00:07:17.287 --rc geninfo_all_blocks=1 00:07:17.287 --rc 
geninfo_unexecuted_blocks=1 00:07:17.287 00:07:17.287 ' 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.287 --rc genhtml_branch_coverage=1 00:07:17.287 --rc genhtml_function_coverage=1 00:07:17.287 --rc genhtml_legend=1 00:07:17.287 --rc geninfo_all_blocks=1 00:07:17.287 --rc geninfo_unexecuted_blocks=1 00:07:17.287 00:07:17.287 ' 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.287 --rc genhtml_branch_coverage=1 00:07:17.287 --rc genhtml_function_coverage=1 00:07:17.287 --rc genhtml_legend=1 00:07:17.287 --rc geninfo_all_blocks=1 00:07:17.287 --rc geninfo_unexecuted_blocks=1 00:07:17.287 00:07:17.287 ' 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.287 --rc genhtml_branch_coverage=1 00:07:17.287 --rc genhtml_function_coverage=1 00:07:17.287 --rc genhtml_legend=1 00:07:17.287 --rc geninfo_all_blocks=1 00:07:17.287 --rc geninfo_unexecuted_blocks=1 00:07:17.287 00:07:17.287 ' 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.287 09:25:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.287 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:17.288 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.433 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.434 09:25:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:25.434 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:25.434 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:25.434 09:25:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:25.434 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:07:25.434 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.434 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:25.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:07:25.434 00:07:25.434 --- 10.0.0.2 ping statistics --- 00:07:25.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.434 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:07:25.434 00:07:25.434 --- 10.0.0.1 ping statistics --- 00:07:25.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.434 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=116201 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 116201 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 116201 ']' 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.434 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.435 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.435 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.435 [2024-11-19 09:25:11.396470] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:25.435 [2024-11-19 09:25:11.396540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.435 [2024-11-19 09:25:11.494759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.435 [2024-11-19 09:25:11.548567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.435 [2024-11-19 09:25:11.548610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.435 [2024-11-19 09:25:11.548619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.435 [2024-11-19 09:25:11.548626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.435 [2024-11-19 09:25:11.548632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:25.435 [2024-11-19 09:25:11.550624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.435 [2024-11-19 09:25:11.550783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.435 [2024-11-19 09:25:11.550784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 [2024-11-19 09:25:12.284485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 Malloc0 00:07:25.697 09:25:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 Delay0 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 [2024-11-19 09:25:12.369659] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.697 09:25:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:25.959 [2024-11-19 09:25:12.521914] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:27.873 Initializing NVMe Controllers 00:07:27.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:27.873 controller IO queue size 128 less than required 00:07:27.873 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:27.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:27.873 Initialization complete. Launching workers. 
00:07:27.873 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29500 00:07:27.873 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29561, failed to submit 62 00:07:27.873 success 29504, unsuccessful 57, failed 0 00:07:27.873 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.873 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.873 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.135 rmmod nvme_tcp 00:07:28.135 rmmod nvme_fabrics 00:07:28.135 rmmod nvme_keyring 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:28.135 09:25:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 116201 ']' 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 116201 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 116201 ']' 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 116201 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116201 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116201' 00:07:28.135 killing process with pid 116201 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 116201 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 116201 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.135 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.398 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.314 09:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:30.314 00:07:30.314 real 0m13.377s 00:07:30.314 user 0m14.205s 00:07:30.314 sys 0m6.335s 00:07:30.314 09:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.314 09:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.314 ************************************ 00:07:30.314 END TEST nvmf_abort 00:07:30.314 ************************************ 00:07:30.314 09:25:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:30.314 09:25:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.314 09:25:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.314 09:25:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.314 ************************************ 00:07:30.314 START TEST nvmf_ns_hotplug_stress 00:07:30.314 ************************************ 00:07:30.314 09:25:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:30.576 * Looking for test storage... 00:07:30.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.576 
09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:30.576 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.577 09:25:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.577 --rc genhtml_branch_coverage=1 00:07:30.577 --rc genhtml_function_coverage=1 00:07:30.577 --rc genhtml_legend=1 00:07:30.577 --rc geninfo_all_blocks=1 00:07:30.577 --rc geninfo_unexecuted_blocks=1 00:07:30.577 00:07:30.577 ' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.577 --rc genhtml_branch_coverage=1 00:07:30.577 --rc genhtml_function_coverage=1 00:07:30.577 --rc genhtml_legend=1 00:07:30.577 --rc geninfo_all_blocks=1 00:07:30.577 --rc geninfo_unexecuted_blocks=1 00:07:30.577 00:07:30.577 ' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.577 --rc genhtml_branch_coverage=1 00:07:30.577 --rc genhtml_function_coverage=1 00:07:30.577 --rc genhtml_legend=1 00:07:30.577 --rc geninfo_all_blocks=1 00:07:30.577 --rc geninfo_unexecuted_blocks=1 00:07:30.577 00:07:30.577 ' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.577 --rc genhtml_branch_coverage=1 00:07:30.577 --rc genhtml_function_coverage=1 00:07:30.577 --rc genhtml_legend=1 00:07:30.577 --rc geninfo_all_blocks=1 00:07:30.577 --rc geninfo_unexecuted_blocks=1 00:07:30.577 
00:07:30.577 ' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.577 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.578 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.578 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.578 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.578 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:30.578 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:30.578 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.578 09:25:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:38.727 09:25:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.727 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:38.728 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:38.728 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:38.728 09:25:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:38.728 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.728 09:25:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:38.728 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.728 09:25:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:38.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:07:38.728 00:07:38.728 --- 10.0.0.2 ping statistics --- 00:07:38.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.728 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:07:38.728 00:07:38.728 --- 10.0.0.1 ping statistics --- 00:07:38.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.728 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=120993 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 120993 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 120993 ']' 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:38.728 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.729 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:38.729 [2024-11-19 09:25:24.758819] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:38.729 [2024-11-19 09:25:24.758910] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.729 [2024-11-19 09:25:24.859201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.729 [2024-11-19 09:25:24.911014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.729 [2024-11-19 09:25:24.911065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.729 [2024-11-19 09:25:24.911074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.729 [2024-11-19 09:25:24.911086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.729 [2024-11-19 09:25:24.911092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
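For orientation, the namespace hotplug stress cycle that the trace below repeats (ns_hotplug_stress.sh lines 44-50: remove namespace 1, re-add Delay0, grow NULL1 by one block) can be sketched as a plain shell loop. This is a minimal sketch, not the test script itself: the `rpc` helper here only echoes the command so it runs without an SPDK target, the iteration count is arbitrary (the real test loops for the duration of the spdk_nvme_perf run), and the NQN and bdev names are copied from the log.

```shell
#!/bin/sh
# Sketch of the hotplug stress loop traced below.
# Assumption: rpc.py calls are stubbed with echo so this runs anywhere.
rpc() { echo "rpc.py $*"; }

null_size=1000   # matches target/ns_hotplug_stress.sh@25 in the trace
i=0
while [ "$i" -lt 3 ]; do   # real test: loop while the perf process is alive
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
    i=$((i + 1))
done
```

Each pass bumps `null_size` by one, which is why the trace shows `bdev_null_resize NULL1 1001`, `1002`, `1003`, … on successive iterations while `kill -0 $PERF_PID` checks that the I/O load is still running.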
00:07:38.729 [2024-11-19 09:25:24.912869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.729 [2024-11-19 09:25:24.913028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.729 [2024-11-19 09:25:24.913029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.991 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.991 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:38.991 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:38.991 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.991 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:38.991 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.991 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:38.991 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:39.251 [2024-11-19 09:25:25.776623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.252 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:39.513 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.513 [2024-11-19 09:25:26.171621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.513 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.775 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:40.037 Malloc0 00:07:40.037 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:40.037 Delay0 00:07:40.298 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.298 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:40.560 NULL1 00:07:40.560 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:40.822 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=121684 00:07:40.822 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:40.822 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:40.822 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.083 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.083 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:41.083 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:41.344 true 00:07:41.344 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:41.344 09:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.605 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.605 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:41.605 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:41.867 true 00:07:41.867 09:25:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:41.867 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.127 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.127 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:42.127 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:42.387 true 00:07:42.387 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:42.387 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.652 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.652 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:42.652 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:42.912 true 00:07:42.912 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:42.912 09:25:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.173 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.434 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:43.434 09:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:43.434 true 00:07:43.434 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:43.434 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.696 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.957 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:43.957 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:43.957 true 00:07:43.957 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:43.957 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.218 09:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.479 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:44.479 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:44.479 true 00:07:44.740 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:44.740 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.740 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.001 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:45.001 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:45.001 true 00:07:45.263 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:45.263 09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.263 
09:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.524 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:45.524 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:45.785 true 00:07:45.785 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:45.785 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.785 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.047 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:46.047 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:46.307 true 00:07:46.307 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:46.307 09:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.307 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.568 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:46.568 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:46.830 true 00:07:46.830 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:46.830 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.092 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.092 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:47.092 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:47.354 true 00:07:47.354 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:47.354 09:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.616 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.616 
09:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:47.616 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:47.877 true 00:07:47.877 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:47.877 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.138 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.401 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:48.401 09:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:48.401 true 00:07:48.401 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:48.401 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.663 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.924 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:48.924 09:25:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:48.924 true 00:07:49.186 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:49.186 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.186 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.446 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:49.446 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:49.707 true 00:07:49.707 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:49.708 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.708 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.968 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:49.968 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:50.229 true 00:07:50.229 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:50.229 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.490 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.490 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:50.490 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:50.750 true 00:07:50.750 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:50.750 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.011 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.271 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:51.271 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:51.271 true 00:07:51.271 09:25:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:51.271 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.532 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.792 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:51.792 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:51.792 true 00:07:51.792 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:51.792 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.053 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.314 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:52.314 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:52.314 true 00:07:52.573 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:52.573 09:25:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.573 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.833 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:52.833 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:53.094 true 00:07:53.094 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:53.094 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.355 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.355 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:53.355 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:53.617 true 00:07:53.617 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:53.617 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.878 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.878 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:53.878 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:54.139 true 00:07:54.139 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:54.139 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.400 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.662 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:54.662 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:54.662 true 00:07:54.662 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:54.662 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.923 
09:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.184 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:55.184 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:55.184 true 00:07:55.184 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:55.184 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.445 09:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.717 09:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:55.717 09:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:55.717 true 00:07:55.718 09:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:55.718 09:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.984 09:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.245 09:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:56.245 09:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:56.245 true 00:07:56.507 09:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:56.507 09:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.507 09:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.769 09:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:56.769 09:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:57.030 true 00:07:57.030 09:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:57.030 09:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.030 09:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.291 
09:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:57.291 09:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:57.552 true 00:07:57.552 09:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:57.552 09:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.813 09:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.813 09:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:57.813 09:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:58.075 true 00:07:58.075 09:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:58.075 09:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.338 09:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.338 09:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:58.338 09:25:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:58.600 true 00:07:58.600 09:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:58.600 09:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.862 09:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.123 09:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:59.123 09:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:59.123 true 00:07:59.123 09:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:59.123 09:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.384 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.645 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:59.645 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:59.645 true 00:07:59.645 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:07:59.645 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.906 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.167 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:00.167 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:00.428 true 00:08:00.428 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:00.428 09:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.428 09:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.690 09:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:00.690 09:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:00.951 true 00:08:00.951 09:25:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:00.951 09:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.951 09:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.213 09:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:01.213 09:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:01.474 true 00:08:01.474 09:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:01.474 09:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.735 09:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.735 09:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:01.735 09:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:01.996 true 00:08:01.996 09:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:01.996 09:25:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.257 09:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.518 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:02.518 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:02.518 true 00:08:02.518 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:02.518 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.779 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.039 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:03.039 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:03.039 true 00:08:03.039 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:03.039 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.299 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.560 09:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:03.560 09:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:03.560 true 00:08:03.821 09:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:03.821 09:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.821 09:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.082 09:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:04.082 09:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:04.343 true 00:08:04.343 09:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:04.343 09:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.343 
09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.603 09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:04.603 09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:04.864 true 00:08:04.864 09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:04.864 09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.126 09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.126 09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:05.126 09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:05.387 true 00:08:05.387 09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:05.387 09:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.647 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.647 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:05.647 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:05.908 true 00:08:05.908 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:05.908 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.169 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.430 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:06.430 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:06.430 true 00:08:06.430 09:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:06.430 09:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.690 09:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.951 
09:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:06.951 09:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:06.951 true 00:08:06.951 09:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:06.951 09:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.212 09:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.473 09:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:07.473 09:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:07.473 true 00:08:07.734 09:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:07.734 09:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.734 09:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.995 09:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:07.995 09:25:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:08.256 true 00:08:08.256 09:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:08.256 09:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.256 09:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.518 09:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:08.518 09:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:08.779 true 00:08:08.779 09:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:08.779 09:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.040 09:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.040 09:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:09.040 09:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:09.301 true 00:08:09.301 09:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:09.301 09:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.562 09:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.562 09:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:08:09.562 09:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:08:09.823 true 00:08:09.823 09:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:09.823 09:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.084 09:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.345 09:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:08:10.345 09:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:08:10.345 true 00:08:10.345 09:25:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:10.345 09:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.608 09:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.870 09:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:08:10.870 09:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:08:10.870 true 00:08:10.870 09:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:10.870 09:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.132 Initializing NVMe Controllers 00:08:11.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.132 Controller IO queue size 128, less than required. 00:08:11.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:11.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:11.132 Initialization complete. Launching workers. 
00:08:11.132 ======================================================== 00:08:11.132 Latency(us) 00:08:11.132 Device Information : IOPS MiB/s Average min max 00:08:11.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31124.30 15.20 4112.59 1656.20 43966.21 00:08:11.132 ======================================================== 00:08:11.132 Total : 31124.30 15.20 4112.59 1656.20 43966.21 00:08:11.132 00:08:11.132 09:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.393 09:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:08:11.393 09:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:08:11.393 true 00:08:11.653 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 121684 00:08:11.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (121684) - No such process 00:08:11.653 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 121684 00:08:11.653 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.653 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.914 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:11.914 09:25:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:11.914 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:11.914 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.914 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:12.173 null0 00:08:12.173 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.173 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.173 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:12.173 null1 00:08:12.173 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.173 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.173 09:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:12.433 null2 00:08:12.433 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.433 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.433 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:12.693 null3 
00:08:12.693 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.693 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.693 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:12.693 null4 00:08:12.693 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.693 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.693 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:12.953 null5 00:08:12.953 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:12.953 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:12.953 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:13.213 null6 00:08:13.213 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.213 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.213 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:13.213 null7 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 128237 128238 128240 128242 128244 128246 128248 128250
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.474 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:13.474 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:13.474 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:13.474 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:13.474 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:13.474 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:13.474 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:13.746 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.008 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:14.268 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:14.528 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.528 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:14.529 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:14.790 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:14.791 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:15.051 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.312 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:15.312 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:15.313 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:15.313 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:15.313 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:15.313 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:15.313 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
nqn.2016-06.io.spdk:cnode1 6 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.574 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.835 09:26:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.835 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.096 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.096 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.096 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.097 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.357 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.357 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.357 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.357 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.357 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.357 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.357 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.357 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.623 09:26:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.623 09:26:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.623 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.624 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.624 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.624 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.885 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.146 rmmod nvme_tcp 00:08:17.146 rmmod nvme_fabrics 00:08:17.146 rmmod nvme_keyring 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' 
-n 120993 ']' 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 120993 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 120993 ']' 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 120993 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.146 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120993 00:08:17.408 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:17.408 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:17.408 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120993' 00:08:17.408 killing process with pid 120993 00:08:17.408 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 120993 00:08:17.408 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 120993 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-save 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.408 09:26:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:19.960 00:08:19.960 real 0m49.102s 00:08:19.960 user 3m20.804s 00:08:19.960 sys 0m17.092s 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.960 ************************************ 00:08:19.960 END TEST nvmf_ns_hotplug_stress 00:08:19.960 ************************************ 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.960 09:26:06 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.960 ************************************ 00:08:19.960 START TEST nvmf_delete_subsystem 00:08:19.960 ************************************ 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:19.960 * Looking for test storage... 00:08:19.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.960 
09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.960 --rc genhtml_branch_coverage=1 00:08:19.960 --rc genhtml_function_coverage=1 00:08:19.960 --rc genhtml_legend=1 00:08:19.960 --rc geninfo_all_blocks=1 00:08:19.960 --rc geninfo_unexecuted_blocks=1 00:08:19.960 00:08:19.960 ' 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.960 --rc genhtml_branch_coverage=1 00:08:19.960 --rc genhtml_function_coverage=1 00:08:19.960 --rc genhtml_legend=1 00:08:19.960 --rc geninfo_all_blocks=1 00:08:19.960 --rc geninfo_unexecuted_blocks=1 00:08:19.960 00:08:19.960 ' 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.960 --rc genhtml_branch_coverage=1 00:08:19.960 --rc genhtml_function_coverage=1 00:08:19.960 --rc genhtml_legend=1 00:08:19.960 --rc geninfo_all_blocks=1 00:08:19.960 --rc geninfo_unexecuted_blocks=1 00:08:19.960 00:08:19.960 ' 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.960 --rc 
genhtml_branch_coverage=1 00:08:19.960 --rc genhtml_function_coverage=1 00:08:19.960 --rc genhtml_legend=1 00:08:19.960 --rc geninfo_all_blocks=1 00:08:19.960 --rc geninfo_unexecuted_blocks=1 00:08:19.960 00:08:19.960 ' 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:19.960 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.961 09:26:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.961 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.109 09:26:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:28.109 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:28.109 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:28.109 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:4b:00.1: cvl_0_1' 00:08:28.109 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.109 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:08:28.110 00:08:28.110 --- 10.0.0.2 ping statistics --- 00:08:28.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.110 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:08:28.110 00:08:28.110 --- 10.0.0.1 ping statistics --- 00:08:28.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.110 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:28.110 09:26:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=133424 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 133424 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 133424 ']' 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.110 09:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.110 [2024-11-19 09:26:13.942967] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:28.110 [2024-11-19 09:26:13.943057] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.110 [2024-11-19 09:26:14.042636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.110 [2024-11-19 09:26:14.095220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.110 [2024-11-19 09:26:14.095270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.110 [2024-11-19 09:26:14.095279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.110 [2024-11-19 09:26:14.095286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.110 [2024-11-19 09:26:14.095292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:28.110 [2024-11-19 09:26:14.099192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.110 [2024-11-19 09:26:14.099363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.110 [2024-11-19 09:26:14.805475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.110 [2024-11-19 09:26:14.829745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.110 NULL1 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.110 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.372 Delay0 00:08:28.372 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.372 09:26:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.372 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.372 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.372 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.372 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=133768 00:08:28.372 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:28.372 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:28.372 [2024-11-19 09:26:14.956975] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:30.291 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:30.291 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.291 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:30.552 Write completed with error (sct=0, sc=8)
00:08:30.552 Read completed with error (sct=0, sc=8)
00:08:30.552 Write completed with error (sct=0, sc=8)
00:08:30.552 starting I/O failed: -6
00:08:30.552 [further repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries omitted]
00:08:30.552 [2024-11-19 09:26:17.094431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16052c0 is same with the state(6) to be set
00:08:30.553 [repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries omitted]
00:08:30.553 [2024-11-19 09:26:17.096095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f317c000c40 is same with the state(6) to be set
00:08:30.553 [repeated 'Read/Write completed with error (sct=0, sc=8)' entries omitted]
00:08:31.498 [2024-11-19 09:26:18.060724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16069a0 is same with the state(6) to be set
00:08:31.498 [repeated 'Read/Write completed with error (sct=0, sc=8)' entries omitted]
00:08:31.498 [2024-11-19 09:26:18.096433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f317c00d020 is same with the state(6) to be set
00:08:31.498 [repeated 'Read/Write completed with error (sct=0, sc=8)' entries omitted]
00:08:31.498 [2024-11-19 09:26:18.096923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f317c00d7c0 is same with the state(6) to be set
00:08:31.499 [repeated 'Read/Write completed with error (sct=0, sc=8)' entries omitted]
00:08:31.499 [2024-11-19 09:26:18.097692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16054a0 is same with the state(6) to be set
00:08:31.499 [repeated 'Read/Write completed with error (sct=0, sc=8)' entries omitted]
00:08:31.499 [2024-11-19 09:26:18.098229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1605860 is same with the state(6) to be set
00:08:31.499 Initializing NVMe Controllers
00:08:31.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:31.499 Controller IO queue size 128, less than required.
00:08:31.499 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:31.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:31.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:31.499 Initialization complete. Launching workers.
00:08:31.499 ========================================================
00:08:31.499 Latency(us)
00:08:31.499 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:31.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2 :     171.50       0.08  891419.83     348.69 1009890.00
00:08:31.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3 :     150.62       0.07  998454.85     287.26 2001054.41
00:08:31.499 ========================================================
00:08:31.499 Total                                                                    :     322.13       0.16  941468.61     287.26 2001054.41
00:08:31.499
00:08:31.499 [2024-11-19 09:26:18.098644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16069a0 (9): Bad file descriptor
00:08:31.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:31.499 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.499 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:31.499 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 133768
00:08:31.499 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 133768
00:08:32.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (133768) - No such process
00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 133768
00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:32.073 09:26:18
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 133768 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 133768 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.073 
09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.073 [2024-11-19 09:26:18.630741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=134456 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134456 00:08:32.073 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.073 [2024-11-19 09:26:18.738149] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:32.658 09:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.658 09:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134456 00:08:32.658 09:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.920 09:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.920 09:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134456 00:08:32.920 09:26:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.492 09:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.492 09:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134456 00:08:33.492 09:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.064 09:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.064 09:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134456 00:08:34.064 09:26:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.637 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.637 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134456 00:08:34.637 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:35.209 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:35.209 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134456
00:08:35.209 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:35.209 Initializing NVMe Controllers
00:08:35.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:35.209 Controller IO queue size 128, less than required.
00:08:35.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:35.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:35.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:35.210 Initialization complete. Launching workers.
00:08:35.210 ========================================================
00:08:35.210 Latency(us)
00:08:35.210 Device Information                                                       :       IOPS      MiB/s      Average         min         max
00:08:35.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2 :     128.00       0.06   1003283.43  1000184.09  1043915.96
00:08:35.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3 :     128.00       0.06   1002938.76  1000185.29  1008069.32
00:08:35.210 ========================================================
00:08:35.210 Total                                                                    :     256.00       0.12   1003111.09  1000184.09  1043915.96
00:08:35.210
00:08:35.470 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:35.470 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 134456
00:08:35.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (134456) - No such process
00:08:35.471 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 134456
00:08:35.471 09:26:22
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:35.471 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:35.471 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.471 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:35.471 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.471 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:35.471 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.471 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.471 rmmod nvme_tcp 00:08:35.471 rmmod nvme_fabrics 00:08:35.731 rmmod nvme_keyring 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 133424 ']' 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 133424 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 133424 ']' 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 133424 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 133424 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 133424' 00:08:35.731 killing process with pid 133424 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 133424 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 133424 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.731 09:26:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.731 09:26:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.346 00:08:38.346 real 0m18.281s 00:08:38.346 user 0m30.916s 00:08:38.346 sys 0m6.626s 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.346 ************************************ 00:08:38.346 END TEST nvmf_delete_subsystem 00:08:38.346 ************************************ 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.346 ************************************ 00:08:38.346 START TEST nvmf_host_management 00:08:38.346 ************************************ 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.346 * Looking for test storage... 
00:08:38.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:38.346 09:26:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.346 09:26:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.346 --rc genhtml_branch_coverage=1 00:08:38.346 --rc genhtml_function_coverage=1 00:08:38.346 --rc genhtml_legend=1 00:08:38.346 --rc geninfo_all_blocks=1 00:08:38.346 --rc geninfo_unexecuted_blocks=1 00:08:38.346 00:08:38.346 ' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.346 --rc genhtml_branch_coverage=1 00:08:38.346 --rc genhtml_function_coverage=1 00:08:38.346 --rc genhtml_legend=1 00:08:38.346 --rc geninfo_all_blocks=1 00:08:38.346 --rc geninfo_unexecuted_blocks=1 00:08:38.346 00:08:38.346 ' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.346 --rc genhtml_branch_coverage=1 00:08:38.346 --rc genhtml_function_coverage=1 00:08:38.346 --rc genhtml_legend=1 00:08:38.346 --rc geninfo_all_blocks=1 00:08:38.346 --rc geninfo_unexecuted_blocks=1 00:08:38.346 00:08:38.346 ' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.346 --rc genhtml_branch_coverage=1 00:08:38.346 --rc genhtml_function_coverage=1 00:08:38.346 --rc genhtml_legend=1 00:08:38.346 --rc geninfo_all_blocks=1 00:08:38.346 --rc geninfo_unexecuted_blocks=1 00:08:38.346 00:08:38.346 ' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.346 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.347 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.347 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.347 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.347 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:38.347 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:38.347 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.347 09:26:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:46.491 09:26:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.491 09:26:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:46.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:46.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:46.491 09:26:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:46.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:46.491 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:46.492 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:46.492 09:26:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.492 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:46.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:08:46.492 00:08:46.492 --- 10.0.0.2 ping statistics --- 00:08:46.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.492 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:08:46.492 00:08:46.492 --- 10.0.0.1 ping statistics --- 00:08:46.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.492 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
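The interface setup traced above (create a namespace, move the target NIC into it, assign the 10.0.0.1/10.0.0.2 pair, bring links up, open TCP port 4420, then verify with ping) can be sketched as a standalone script. The function name `setup_target_ns` is hypothetical, not from SPDK's `nvmf/common.sh`; by default it only prints the commands (pass `run` to execute them, which requires root):

```shell
#!/bin/sh
# Sketch of the namespace setup sequence from the log above.
# setup_target_ns (hypothetical name) echoes each command by default;
# "setup_target_ns run" would execute them (needs CAP_NET_ADMIN/root).
setup_target_ns() {
  ns=cvl_0_0_ns_spdk   # target-side namespace, as in the log
  tgt=cvl_0_0          # target interface
  ini=cvl_0_1          # initiator interface
  runner=echo
  [ "${1:-}" = run ] && runner=""

  $runner ip netns add "$ns"
  $runner ip link set "$tgt" netns "$ns"
  $runner ip addr add 10.0.0.1/24 dev "$ini"
  $runner ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
  $runner ip link set "$ini" up
  $runner ip netns exec "$ns" ip link set "$tgt" up
  $runner ip netns exec "$ns" ip link set lo up
  # Accept NVMe/TCP traffic (default discovery/IO port 4420) on the initiator side.
  $runner iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
  # Reachability check in both directions, mirroring the log's ping step.
  $runner ping -c 1 10.0.0.2
  $runner ip netns exec "$ns" ping -c 1 10.0.0.1
}
setup_target_ns
```

Moving the target NIC into its own namespace is what forces initiator-to-target traffic onto the real TCP path instead of the loopback shortcut, which is why the test pings across 10.0.0.1/10.0.0.2 before starting the target.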
00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=139472 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 139472 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 139472 ']' 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.492 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.492 [2024-11-19 09:26:32.276206] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:46.492 [2024-11-19 09:26:32.276271] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.492 [2024-11-19 09:26:32.375515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.492 [2024-11-19 09:26:32.428379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.492 [2024-11-19 09:26:32.428430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.492 [2024-11-19 09:26:32.428440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.492 [2024-11-19 09:26:32.428451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.492 [2024-11-19 09:26:32.428460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:46.492 [2024-11-19 09:26:32.430500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.492 [2024-11-19 09:26:32.430661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.492 [2024-11-19 09:26:32.430815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.492 [2024-11-19 09:26:32.430815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.492 [2024-11-19 09:26:33.147034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:46.492 09:26:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.492 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.492 Malloc0 00:08:46.492 [2024-11-19 09:26:33.234294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=139696 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 139696 /var/tmp/bdevperf.sock 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 139696 ']' 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:46.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.755 { 00:08:46.755 "params": { 00:08:46.755 "name": "Nvme$subsystem", 00:08:46.755 "trtype": "$TEST_TRANSPORT", 00:08:46.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.755 "adrfam": "ipv4", 00:08:46.755 "trsvcid": "$NVMF_PORT", 00:08:46.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.755 "hdgst": ${hdgst:-false}, 
00:08:46.755 "ddgst": ${ddgst:-false} 00:08:46.755 }, 00:08:46.755 "method": "bdev_nvme_attach_controller" 00:08:46.755 } 00:08:46.755 EOF 00:08:46.755 )") 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:46.755 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.755 "params": { 00:08:46.755 "name": "Nvme0", 00:08:46.755 "trtype": "tcp", 00:08:46.755 "traddr": "10.0.0.2", 00:08:46.755 "adrfam": "ipv4", 00:08:46.755 "trsvcid": "4420", 00:08:46.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:46.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:46.755 "hdgst": false, 00:08:46.755 "ddgst": false 00:08:46.755 }, 00:08:46.755 "method": "bdev_nvme_attach_controller" 00:08:46.755 }' 00:08:46.755 [2024-11-19 09:26:33.344447] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:46.755 [2024-11-19 09:26:33.344520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139696 ] 00:08:46.755 [2024-11-19 09:26:33.438401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.755 [2024-11-19 09:26:33.492525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.017 Running I/O for 10 seconds... 
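The bdevperf configuration printed above is built from a shell heredoc template (`gen_nvmf_target_json`) and fed to bdevperf via `--json /dev/fd/63`. A minimal sketch of the same templating, with a hypothetical function name and the log's fixed values substituted for the test harness variables:

```shell
#!/bin/sh
# Sketch (assumed name gen_target_json, not SPDK's gen_nvmf_target_json)
# of the heredoc templating that produced the bdevperf JSON in the log.
gen_target_json() {
  subsystem=${1:-0}          # index used for Nvme$subsystem / cnode$subsystem
  traddr=${2:-10.0.0.2}      # target IP from the log's netns setup
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_target_json 0
```

In the real harness the per-subsystem fragments are collected into an array, joined with `jq`, and streamed to bdevperf over a file descriptor, so the target connection is described entirely in JSON without a config file on disk.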
00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.590 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.590 [2024-11-19 09:26:34.262154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b22130 is same with the state(6) to be set 00:08:47.590 [2024-11-19 09:26:34.262274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b22130 is same with the state(6) to be set 00:08:47.590 [2024-11-19 09:26:34.262609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.590 [2024-11-19 09:26:34.262664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.590 [2024-11-19 09:26:34.262685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.590 [2024-11-19 09:26:34.262694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.590 [2024-11-19 09:26:34.262705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.590 [2024-11-19 09:26:34.262713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.590 [2024-11-19 09:26:34.262723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.590 [2024-11-19 09:26:34.262730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.590 [2024-11-19 09:26:34.262740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.590 [2024-11-19 09:26:34.262748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.590 [2024-11-19 09:26:34.262758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:47.591 [2024-11-19 09:26:34.262775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262879] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.262992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.262999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263183] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.591 [2024-11-19 09:26:34.263455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.591 [2024-11-19 09:26:34.263462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 
09:26:34.263481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263574] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 
[2024-11-19 09:26:34.263776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.592 [2024-11-19 09:26:34.263794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2190 is same with the state(6) to be set 00:08:47.592 [2024-11-19 09:26:34.263925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:47.592 [2024-11-19 09:26:34.263941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:47.592 [2024-11-19 09:26:34.263967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:47.592 [2024-11-19 09:26:34.263988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.263998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:47.592 [2024-11-19 09:26:34.264005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.592 [2024-11-19 09:26:34.264012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89000 is same with the state(6) to be set 00:08:47.592 [2024-11-19 09:26:34.265234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:47.592 task offset: 96128 on job bdev=Nvme0n1 fails 00:08:47.592 00:08:47.592 Latency(us) 00:08:47.592 [2024-11-19T08:26:34.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.592 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:47.592 Job: Nvme0n1 ended in about 0.56 seconds with error 00:08:47.592 Verification LBA range: start 0x0 length 0x400 00:08:47.592 Nvme0n1 : 0.56 1253.69 78.36 113.97 0.00 45697.71 2157.23 37355.52 00:08:47.592 [2024-11-19T08:26:34.340Z] =================================================================================================================== 00:08:47.592 [2024-11-19T08:26:34.340Z] Total : 1253.69 78.36 113.97 0.00 45697.71 2157.23 37355.52 00:08:47.592 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.592 [2024-11-19 09:26:34.267505] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:47.592 [2024-11-19 09:26:34.267542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c89000 (9): Bad file descriptor 00:08:47.592 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:47.592 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.592 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.592 09:26:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.592 09:26:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:47.853 [2024-11-19 09:26:34.412552] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 139696 00:08:48.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (139696) - No such process 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:48.797 { 00:08:48.797 "params": { 00:08:48.797 "name": "Nvme$subsystem", 00:08:48.797 "trtype": "$TEST_TRANSPORT", 00:08:48.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.797 "adrfam": "ipv4", 
00:08:48.797 "trsvcid": "$NVMF_PORT", 00:08:48.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.797 "hdgst": ${hdgst:-false}, 00:08:48.797 "ddgst": ${ddgst:-false} 00:08:48.797 }, 00:08:48.797 "method": "bdev_nvme_attach_controller" 00:08:48.797 } 00:08:48.797 EOF 00:08:48.797 )") 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:48.797 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:48.797 "params": { 00:08:48.797 "name": "Nvme0", 00:08:48.797 "trtype": "tcp", 00:08:48.797 "traddr": "10.0.0.2", 00:08:48.797 "adrfam": "ipv4", 00:08:48.797 "trsvcid": "4420", 00:08:48.797 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:48.797 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:48.797 "hdgst": false, 00:08:48.797 "ddgst": false 00:08:48.797 }, 00:08:48.797 "method": "bdev_nvme_attach_controller" 00:08:48.797 }' 00:08:48.797 [2024-11-19 09:26:35.337680] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:48.797 [2024-11-19 09:26:35.337734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140197 ] 00:08:48.797 [2024-11-19 09:26:35.424694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.797 [2024-11-19 09:26:35.458388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.058 Running I/O for 1 seconds... 
00:08:50.001 1344.00 IOPS, 84.00 MiB/s 00:08:50.001 Latency(us) 00:08:50.001 [2024-11-19T08:26:36.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.001 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:50.001 Verification LBA range: start 0x0 length 0x400 00:08:50.001 Nvme0n1 : 1.01 1399.29 87.46 0.00 0.00 44985.77 10212.69 34515.63 00:08:50.001 [2024-11-19T08:26:36.749Z] =================================================================================================================== 00:08:50.001 [2024-11-19T08:26:36.749Z] Total : 1399.29 87.46 0.00 0.00 44985.77 10212.69 34515.63 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.262 09:26:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.262 rmmod nvme_tcp 00:08:50.262 rmmod nvme_fabrics 00:08:50.262 rmmod nvme_keyring 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 139472 ']' 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 139472 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 139472 ']' 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 139472 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139472 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139472' 00:08:50.262 killing process with pid 139472 00:08:50.262 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 139472 00:08:50.262 09:26:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 139472 00:08:50.524 [2024-11-19 09:26:37.085242] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.524 09:26:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:53.076 00:08:53.076 real 0m14.619s 00:08:53.076 user 0m23.647s 
00:08:53.076 sys 0m6.639s 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.076 ************************************ 00:08:53.076 END TEST nvmf_host_management 00:08:53.076 ************************************ 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.076 ************************************ 00:08:53.076 START TEST nvmf_lvol 00:08:53.076 ************************************ 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:53.076 * Looking for test storage... 
00:08:53.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.076 09:26:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:53.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.076 --rc genhtml_branch_coverage=1 00:08:53.076 --rc genhtml_function_coverage=1 00:08:53.076 --rc genhtml_legend=1 00:08:53.076 --rc geninfo_all_blocks=1 00:08:53.076 --rc geninfo_unexecuted_blocks=1 
00:08:53.076 00:08:53.076 ' 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:53.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.076 --rc genhtml_branch_coverage=1 00:08:53.076 --rc genhtml_function_coverage=1 00:08:53.076 --rc genhtml_legend=1 00:08:53.076 --rc geninfo_all_blocks=1 00:08:53.076 --rc geninfo_unexecuted_blocks=1 00:08:53.076 00:08:53.076 ' 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:53.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.076 --rc genhtml_branch_coverage=1 00:08:53.076 --rc genhtml_function_coverage=1 00:08:53.076 --rc genhtml_legend=1 00:08:53.076 --rc geninfo_all_blocks=1 00:08:53.076 --rc geninfo_unexecuted_blocks=1 00:08:53.076 00:08:53.076 ' 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:53.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.076 --rc genhtml_branch_coverage=1 00:08:53.076 --rc genhtml_function_coverage=1 00:08:53.076 --rc genhtml_legend=1 00:08:53.076 --rc geninfo_all_blocks=1 00:08:53.076 --rc geninfo_unexecuted_blocks=1 00:08:53.076 00:08:53.076 ' 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.076 09:26:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.076 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.077 09:26:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:01.226 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:01.226 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.226 
09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:01.226 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:01.227 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.227 09:26:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:01.227 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:09:01.227 00:09:01.227 --- 10.0.0.2 ping statistics --- 00:09:01.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.227 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:09:01.227 00:09:01.227 --- 10.0.0.1 ping statistics --- 00:09:01.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.227 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=144619 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 144619 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 144619 ']' 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.227 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.227 [2024-11-19 09:26:46.998814] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:01.227 [2024-11-19 09:26:46.998876] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.227 [2024-11-19 09:26:47.108008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:01.227 [2024-11-19 09:26:47.175393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.227 [2024-11-19 09:26:47.175455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.227 [2024-11-19 09:26:47.175468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.227 [2024-11-19 09:26:47.175479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.227 [2024-11-19 09:26:47.175488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:01.227 [2024-11-19 09:26:47.177805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.227 [2024-11-19 09:26:47.177966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.227 [2024-11-19 09:26:47.177968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.227 09:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.227 09:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:01.227 09:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.227 09:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.227 09:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.227 09:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.227 09:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:01.489 [2024-11-19 09:26:48.038650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.489 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:01.751 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:01.751 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.014 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:02.014 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:02.014 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:02.276 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=82428971-998b-45cf-846b-21c47650bcdb 00:09:02.276 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 82428971-998b-45cf-846b-21c47650bcdb lvol 20 00:09:02.550 09:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c7928b50-ae8a-4b07-bac4-218895172845 00:09:02.550 09:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:02.823 09:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c7928b50-ae8a-4b07-bac4-218895172845 00:09:02.823 09:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:03.084 [2024-11-19 09:26:49.693741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.084 09:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.346 09:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=145268 00:09:03.346 09:26:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:03.346 09:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:04.291 09:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c7928b50-ae8a-4b07-bac4-218895172845 MY_SNAPSHOT 00:09:04.552 09:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=094564c9-9d52-445e-9f4a-8a6021bfdfea 00:09:04.552 09:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c7928b50-ae8a-4b07-bac4-218895172845 30 00:09:04.813 09:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 094564c9-9d52-445e-9f4a-8a6021bfdfea MY_CLONE 00:09:04.813 09:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bb0c9395-f1e7-4f91-94fb-eb0bbea3d61b 00:09:04.813 09:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bb0c9395-f1e7-4f91-94fb-eb0bbea3d61b 00:09:05.385 09:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 145268 00:09:13.526 Initializing NVMe Controllers 00:09:13.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:13.526 Controller IO queue size 128, less than required. 00:09:13.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:13.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:13.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:13.526 Initialization complete. Launching workers. 00:09:13.526 ======================================================== 00:09:13.526 Latency(us) 00:09:13.526 Device Information : IOPS MiB/s Average min max 00:09:13.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16403.53 64.08 7806.10 1871.41 63064.62 00:09:13.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17483.39 68.29 7321.71 568.93 49513.15 00:09:13.526 ======================================================== 00:09:13.526 Total : 33886.92 132.37 7556.19 568.93 63064.62 00:09:13.526 00:09:13.526 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:13.787 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c7928b50-ae8a-4b07-bac4-218895172845 00:09:14.048 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82428971-998b-45cf-846b-21c47650bcdb 00:09:14.048 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.309 rmmod nvme_tcp 00:09:14.309 rmmod nvme_fabrics 00:09:14.309 rmmod nvme_keyring 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 144619 ']' 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 144619 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 144619 ']' 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 144619 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144619 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144619' 00:09:14.309 killing process with pid 144619 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@973 -- # kill 144619 00:09:14.309 09:27:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 144619 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.571 09:27:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.492 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.492 00:09:16.492 real 0m23.867s 00:09:16.492 user 1m5.017s 00:09:16.492 sys 0m8.437s 00:09:16.492 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.492 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:16.492 ************************************ 00:09:16.492 END TEST nvmf_lvol 00:09:16.492 
************************************ 00:09:16.492 09:27:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:16.492 09:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.492 09:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.492 09:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.492 ************************************ 00:09:16.492 START TEST nvmf_lvs_grow 00:09:16.492 ************************************ 00:09:16.492 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:16.754 * Looking for test storage... 00:09:16.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.754 --rc genhtml_branch_coverage=1 00:09:16.754 --rc genhtml_function_coverage=1 00:09:16.754 --rc genhtml_legend=1 00:09:16.754 --rc geninfo_all_blocks=1 00:09:16.754 --rc geninfo_unexecuted_blocks=1 00:09:16.754 00:09:16.754 ' 
00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.754 --rc genhtml_branch_coverage=1 00:09:16.754 --rc genhtml_function_coverage=1 00:09:16.754 --rc genhtml_legend=1 00:09:16.754 --rc geninfo_all_blocks=1 00:09:16.754 --rc geninfo_unexecuted_blocks=1 00:09:16.754 00:09:16.754 ' 00:09:16.754 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.754 --rc genhtml_branch_coverage=1 00:09:16.754 --rc genhtml_function_coverage=1 00:09:16.754 --rc genhtml_legend=1 00:09:16.754 --rc geninfo_all_blocks=1 00:09:16.755 --rc geninfo_unexecuted_blocks=1 00:09:16.755 00:09:16.755 ' 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.755 --rc genhtml_branch_coverage=1 00:09:16.755 --rc genhtml_function_coverage=1 00:09:16.755 --rc genhtml_legend=1 00:09:16.755 --rc geninfo_all_blocks=1 00:09:16.755 --rc geninfo_unexecuted_blocks=1 00:09:16.755 00:09:16.755 ' 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.755 09:27:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.755 
09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.755 09:27:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.755 
09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.755 09:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:24.906 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.906 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:24.907 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.907 
09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:24.907 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:24.907 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.907 09:27:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:09:24.907 00:09:24.907 --- 10.0.0.2 ping statistics --- 00:09:24.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.907 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:09:24.907 00:09:24.907 --- 10.0.0.1 ping statistics --- 00:09:24.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.907 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=152121 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 152121 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 152121 ']' 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.907 09:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.907 [2024-11-19 09:27:11.037426] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:24.907 [2024-11-19 09:27:11.037525] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.907 [2024-11-19 09:27:11.136258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.907 [2024-11-19 09:27:11.187499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.907 [2024-11-19 09:27:11.187549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.907 [2024-11-19 09:27:11.187558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.907 [2024-11-19 09:27:11.187565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.907 [2024-11-19 09:27:11.187571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:24.907 [2024-11-19 09:27:11.188313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.169 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.169 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:25.169 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:25.169 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:25.169 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.169 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.169 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.431 [2024-11-19 09:27:12.039013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.431 ************************************ 00:09:25.431 START TEST lvs_grow_clean 00:09:25.431 ************************************ 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.431 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.693 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:25.693 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:25.954 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:25.954 09:27:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:25.954 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:25.954 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:25.954 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:25.954 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea lvol 150 00:09:26.215 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d7c3dbc9-8da4-434a-aa5a-975fe67c8db1 00:09:26.216 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.216 09:27:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:26.478 [2024-11-19 09:27:13.013845] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:26.478 [2024-11-19 09:27:13.013922] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:26.478 true 00:09:26.478 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:26.478 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:26.739 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:26.739 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:26.739 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d7c3dbc9-8da4-434a-aa5a-975fe67c8db1 00:09:27.001 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:27.263 [2024-11-19 09:27:13.768237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=152925 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 152925 /var/tmp/bdevperf.sock 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 152925 ']' 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:27.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.263 09:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:27.525 [2024-11-19 09:27:14.021062] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:27.525 [2024-11-19 09:27:14.021132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152925 ] 00:09:27.525 [2024-11-19 09:27:14.115050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.525 [2024-11-19 09:27:14.167721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.099 09:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.099 09:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:28.099 09:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:28.673 Nvme0n1 00:09:28.673 09:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:28.935 [ 00:09:28.935 { 00:09:28.935 "name": "Nvme0n1", 00:09:28.935 "aliases": [ 00:09:28.935 "d7c3dbc9-8da4-434a-aa5a-975fe67c8db1" 00:09:28.935 ], 00:09:28.935 "product_name": "NVMe disk", 00:09:28.935 "block_size": 4096, 00:09:28.935 "num_blocks": 38912, 00:09:28.935 "uuid": "d7c3dbc9-8da4-434a-aa5a-975fe67c8db1", 00:09:28.935 "numa_id": 0, 00:09:28.935 "assigned_rate_limits": { 00:09:28.935 "rw_ios_per_sec": 0, 00:09:28.935 "rw_mbytes_per_sec": 0, 00:09:28.935 "r_mbytes_per_sec": 0, 00:09:28.935 "w_mbytes_per_sec": 0 00:09:28.935 }, 00:09:28.935 "claimed": false, 00:09:28.935 "zoned": false, 00:09:28.935 "supported_io_types": { 00:09:28.935 "read": true, 
00:09:28.935 "write": true, 00:09:28.935 "unmap": true, 00:09:28.935 "flush": true, 00:09:28.935 "reset": true, 00:09:28.935 "nvme_admin": true, 00:09:28.935 "nvme_io": true, 00:09:28.935 "nvme_io_md": false, 00:09:28.935 "write_zeroes": true, 00:09:28.935 "zcopy": false, 00:09:28.935 "get_zone_info": false, 00:09:28.935 "zone_management": false, 00:09:28.935 "zone_append": false, 00:09:28.935 "compare": true, 00:09:28.935 "compare_and_write": true, 00:09:28.935 "abort": true, 00:09:28.935 "seek_hole": false, 00:09:28.935 "seek_data": false, 00:09:28.935 "copy": true, 00:09:28.935 "nvme_iov_md": false 00:09:28.935 }, 00:09:28.935 "memory_domains": [ 00:09:28.935 { 00:09:28.935 "dma_device_id": "system", 00:09:28.935 "dma_device_type": 1 00:09:28.935 } 00:09:28.935 ], 00:09:28.935 "driver_specific": { 00:09:28.935 "nvme": [ 00:09:28.935 { 00:09:28.935 "trid": { 00:09:28.935 "trtype": "TCP", 00:09:28.935 "adrfam": "IPv4", 00:09:28.935 "traddr": "10.0.0.2", 00:09:28.935 "trsvcid": "4420", 00:09:28.935 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:28.935 }, 00:09:28.935 "ctrlr_data": { 00:09:28.935 "cntlid": 1, 00:09:28.935 "vendor_id": "0x8086", 00:09:28.935 "model_number": "SPDK bdev Controller", 00:09:28.935 "serial_number": "SPDK0", 00:09:28.935 "firmware_revision": "25.01", 00:09:28.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:28.935 "oacs": { 00:09:28.935 "security": 0, 00:09:28.935 "format": 0, 00:09:28.935 "firmware": 0, 00:09:28.935 "ns_manage": 0 00:09:28.935 }, 00:09:28.935 "multi_ctrlr": true, 00:09:28.935 "ana_reporting": false 00:09:28.935 }, 00:09:28.935 "vs": { 00:09:28.935 "nvme_version": "1.3" 00:09:28.935 }, 00:09:28.935 "ns_data": { 00:09:28.935 "id": 1, 00:09:28.935 "can_share": true 00:09:28.935 } 00:09:28.935 } 00:09:28.935 ], 00:09:28.935 "mp_policy": "active_passive" 00:09:28.935 } 00:09:28.935 } 00:09:28.935 ] 00:09:28.935 09:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=153262 
00:09:28.935 09:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:28.936 09:27:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:28.936 Running I/O for 10 seconds... 00:09:29.879 Latency(us) 00:09:29.879 [2024-11-19T08:27:16.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.879 Nvme0n1 : 1.00 24329.00 95.04 0.00 0.00 0.00 0.00 0.00 00:09:29.879 [2024-11-19T08:27:16.627Z] =================================================================================================================== 00:09:29.879 [2024-11-19T08:27:16.627Z] Total : 24329.00 95.04 0.00 0.00 0.00 0.00 0.00 00:09:29.879 00:09:30.821 09:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:30.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.821 Nvme0n1 : 2.00 24256.50 94.75 0.00 0.00 0.00 0.00 0.00 00:09:30.821 [2024-11-19T08:27:17.569Z] =================================================================================================================== 00:09:30.821 [2024-11-19T08:27:17.569Z] Total : 24256.50 94.75 0.00 0.00 0.00 0.00 0.00 00:09:30.821 00:09:31.083 true 00:09:31.083 09:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:31.083 09:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:31.343 09:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:31.343 09:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:31.343 09:27:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 153262 00:09:31.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.915 Nvme0n1 : 3.00 24245.67 94.71 0.00 0.00 0.00 0.00 0.00 00:09:31.915 [2024-11-19T08:27:18.663Z] =================================================================================================================== 00:09:31.915 [2024-11-19T08:27:18.663Z] Total : 24245.67 94.71 0.00 0.00 0.00 0.00 0.00 00:09:31.915 00:09:32.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.859 Nvme0n1 : 4.00 24262.25 94.77 0.00 0.00 0.00 0.00 0.00 00:09:32.859 [2024-11-19T08:27:19.607Z] =================================================================================================================== 00:09:32.859 [2024-11-19T08:27:19.607Z] Total : 24262.25 94.77 0.00 0.00 0.00 0.00 0.00 00:09:32.859 00:09:34.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.248 Nvme0n1 : 5.00 24269.00 94.80 0.00 0.00 0.00 0.00 0.00 00:09:34.248 [2024-11-19T08:27:20.996Z] =================================================================================================================== 00:09:34.248 [2024-11-19T08:27:20.996Z] Total : 24269.00 94.80 0.00 0.00 0.00 0.00 0.00 00:09:34.248 00:09:35.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.191 Nvme0n1 : 6.00 24286.83 94.87 0.00 0.00 0.00 0.00 0.00 00:09:35.191 [2024-11-19T08:27:21.939Z] =================================================================================================================== 00:09:35.191 
[2024-11-19T08:27:21.939Z] Total : 24286.83 94.87 0.00 0.00 0.00 0.00 0.00 00:09:35.191 00:09:36.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.133 Nvme0n1 : 7.00 24304.14 94.94 0.00 0.00 0.00 0.00 0.00 00:09:36.133 [2024-11-19T08:27:22.881Z] =================================================================================================================== 00:09:36.133 [2024-11-19T08:27:22.881Z] Total : 24304.14 94.94 0.00 0.00 0.00 0.00 0.00 00:09:36.133 00:09:37.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.077 Nvme0n1 : 8.00 24318.12 94.99 0.00 0.00 0.00 0.00 0.00 00:09:37.077 [2024-11-19T08:27:23.825Z] =================================================================================================================== 00:09:37.077 [2024-11-19T08:27:23.825Z] Total : 24318.12 94.99 0.00 0.00 0.00 0.00 0.00 00:09:37.077 00:09:38.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.020 Nvme0n1 : 9.00 24330.78 95.04 0.00 0.00 0.00 0.00 0.00 00:09:38.020 [2024-11-19T08:27:24.768Z] =================================================================================================================== 00:09:38.020 [2024-11-19T08:27:24.768Z] Total : 24330.78 95.04 0.00 0.00 0.00 0.00 0.00 00:09:38.020 00:09:38.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.967 Nvme0n1 : 10.00 24339.30 95.08 0.00 0.00 0.00 0.00 0.00 00:09:38.967 [2024-11-19T08:27:25.715Z] =================================================================================================================== 00:09:38.967 [2024-11-19T08:27:25.715Z] Total : 24339.30 95.08 0.00 0.00 0.00 0.00 0.00 00:09:38.967 00:09:38.967 00:09:38.967 Latency(us) 00:09:38.967 [2024-11-19T08:27:25.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:38.967 Nvme0n1 : 10.01 24339.46 95.08 0.00 0.00 5254.94 2034.35 8683.52 00:09:38.967 [2024-11-19T08:27:25.715Z] =================================================================================================================== 00:09:38.967 [2024-11-19T08:27:25.715Z] Total : 24339.46 95.08 0.00 0.00 5254.94 2034.35 8683.52 00:09:38.967 { 00:09:38.967 "results": [ 00:09:38.967 { 00:09:38.967 "job": "Nvme0n1", 00:09:38.967 "core_mask": "0x2", 00:09:38.967 "workload": "randwrite", 00:09:38.967 "status": "finished", 00:09:38.967 "queue_depth": 128, 00:09:38.967 "io_size": 4096, 00:09:38.967 "runtime": 10.005193, 00:09:38.967 "iops": 24339.460518152922, 00:09:38.967 "mibps": 95.07601764903485, 00:09:38.967 "io_failed": 0, 00:09:38.967 "io_timeout": 0, 00:09:38.967 "avg_latency_us": 5254.9386942399215, 00:09:38.967 "min_latency_us": 2034.3466666666666, 00:09:38.967 "max_latency_us": 8683.52 00:09:38.967 } 00:09:38.967 ], 00:09:38.967 "core_count": 1 00:09:38.967 } 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 152925 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 152925 ']' 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 152925 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152925 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152925' 00:09:38.967 killing process with pid 152925 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 152925 00:09:38.967 Received shutdown signal, test time was about 10.000000 seconds 00:09:38.967 00:09:38.967 Latency(us) 00:09:38.967 [2024-11-19T08:27:25.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.967 [2024-11-19T08:27:25.715Z] =================================================================================================================== 00:09:38.967 [2024-11-19T08:27:25.715Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:38.967 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 152925 00:09:39.228 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:39.228 09:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:39.489 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:39.489 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:39.750 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:39.750 09:27:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:39.750 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.750 [2024-11-19 09:27:26.426575] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:39.750 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:39.750 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:39.750 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:39.750 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.750 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.750 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.750 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.751 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.751 09:27:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.751 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.751 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:39.751 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:40.013 request: 00:09:40.013 { 00:09:40.013 "uuid": "c90b6a75-f4d5-402f-8ef8-a3e80010fcea", 00:09:40.013 "method": "bdev_lvol_get_lvstores", 00:09:40.013 "req_id": 1 00:09:40.013 } 00:09:40.013 Got JSON-RPC error response 00:09:40.013 response: 00:09:40.013 { 00:09:40.013 "code": -19, 00:09:40.013 "message": "No such device" 00:09:40.013 } 00:09:40.013 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:40.013 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:40.013 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:40.013 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:40.013 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:40.275 aio_bdev 00:09:40.275 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev d7c3dbc9-8da4-434a-aa5a-975fe67c8db1 00:09:40.275 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d7c3dbc9-8da4-434a-aa5a-975fe67c8db1 00:09:40.275 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.275 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:40.275 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.275 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.275 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:40.275 09:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d7c3dbc9-8da4-434a-aa5a-975fe67c8db1 -t 2000 00:09:40.536 [ 00:09:40.536 { 00:09:40.536 "name": "d7c3dbc9-8da4-434a-aa5a-975fe67c8db1", 00:09:40.536 "aliases": [ 00:09:40.536 "lvs/lvol" 00:09:40.536 ], 00:09:40.536 "product_name": "Logical Volume", 00:09:40.536 "block_size": 4096, 00:09:40.536 "num_blocks": 38912, 00:09:40.536 "uuid": "d7c3dbc9-8da4-434a-aa5a-975fe67c8db1", 00:09:40.536 "assigned_rate_limits": { 00:09:40.536 "rw_ios_per_sec": 0, 00:09:40.536 "rw_mbytes_per_sec": 0, 00:09:40.536 "r_mbytes_per_sec": 0, 00:09:40.536 "w_mbytes_per_sec": 0 00:09:40.536 }, 00:09:40.536 "claimed": false, 00:09:40.536 "zoned": false, 00:09:40.536 "supported_io_types": { 00:09:40.536 "read": true, 00:09:40.536 "write": true, 00:09:40.536 "unmap": true, 00:09:40.536 "flush": false, 00:09:40.536 "reset": true, 00:09:40.536 
"nvme_admin": false, 00:09:40.536 "nvme_io": false, 00:09:40.536 "nvme_io_md": false, 00:09:40.536 "write_zeroes": true, 00:09:40.536 "zcopy": false, 00:09:40.536 "get_zone_info": false, 00:09:40.536 "zone_management": false, 00:09:40.536 "zone_append": false, 00:09:40.536 "compare": false, 00:09:40.536 "compare_and_write": false, 00:09:40.536 "abort": false, 00:09:40.536 "seek_hole": true, 00:09:40.536 "seek_data": true, 00:09:40.536 "copy": false, 00:09:40.536 "nvme_iov_md": false 00:09:40.536 }, 00:09:40.536 "driver_specific": { 00:09:40.536 "lvol": { 00:09:40.536 "lvol_store_uuid": "c90b6a75-f4d5-402f-8ef8-a3e80010fcea", 00:09:40.536 "base_bdev": "aio_bdev", 00:09:40.536 "thin_provision": false, 00:09:40.536 "num_allocated_clusters": 38, 00:09:40.536 "snapshot": false, 00:09:40.536 "clone": false, 00:09:40.536 "esnap_clone": false 00:09:40.536 } 00:09:40.536 } 00:09:40.536 } 00:09:40.536 ] 00:09:40.536 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:40.536 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:40.536 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:40.798 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:40.798 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:40.798 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:40.798 09:27:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:40.798 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d7c3dbc9-8da4-434a-aa5a-975fe67c8db1 00:09:41.059 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c90b6a75-f4d5-402f-8ef8-a3e80010fcea 00:09:41.321 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:41.321 09:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:41.321 00:09:41.321 real 0m15.916s 00:09:41.321 user 0m15.590s 00:09:41.321 sys 0m1.502s 00:09:41.321 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.321 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:41.321 ************************************ 00:09:41.321 END TEST lvs_grow_clean 00:09:41.321 ************************************ 00:09:41.321 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:41.321 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:41.321 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.321 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:41.582 ************************************ 
00:09:41.582 START TEST lvs_grow_dirty 00:09:41.582 ************************************ 00:09:41.582 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:41.582 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:41.582 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:41.582 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:41.583 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:41.583 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:41.583 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:41.583 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:41.583 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:41.583 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.583 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:41.583 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:41.844 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:41.844 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:41.844 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:42.105 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:42.105 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:42.105 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f lvol 150 00:09:42.105 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7f7820c7-1e41-4db5-b31e-c6487b9aa4da 00:09:42.105 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:42.105 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:42.366 [2024-11-19 09:27:28.985371] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:42.366 [2024-11-19 09:27:28.985411] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:42.366 true 00:09:42.366 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:42.366 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:42.634 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:42.634 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:42.634 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f7820c7-1e41-4db5-b31e-c6487b9aa4da 00:09:42.902 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:43.163 [2024-11-19 09:27:29.659315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.163 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.163 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=156050 00:09:43.163 09:27:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:43.163 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 156050 /var/tmp/bdevperf.sock 00:09:43.163 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:43.163 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 156050 ']' 00:09:43.164 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:43.164 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.164 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:43.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:43.164 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.164 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:43.164 [2024-11-19 09:27:29.874892] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:43.164 [2024-11-19 09:27:29.874947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156050 ] 00:09:43.425 [2024-11-19 09:27:29.959305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.425 [2024-11-19 09:27:29.989360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.998 09:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.998 09:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:43.998 09:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:44.260 Nvme0n1 00:09:44.260 09:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:44.521 [ 00:09:44.521 { 00:09:44.521 "name": "Nvme0n1", 00:09:44.521 "aliases": [ 00:09:44.521 "7f7820c7-1e41-4db5-b31e-c6487b9aa4da" 00:09:44.521 ], 00:09:44.521 "product_name": "NVMe disk", 00:09:44.521 "block_size": 4096, 00:09:44.521 "num_blocks": 38912, 00:09:44.521 "uuid": "7f7820c7-1e41-4db5-b31e-c6487b9aa4da", 00:09:44.521 "numa_id": 0, 00:09:44.521 "assigned_rate_limits": { 00:09:44.521 "rw_ios_per_sec": 0, 00:09:44.521 "rw_mbytes_per_sec": 0, 00:09:44.521 "r_mbytes_per_sec": 0, 00:09:44.521 "w_mbytes_per_sec": 0 00:09:44.521 }, 00:09:44.521 "claimed": false, 00:09:44.521 "zoned": false, 00:09:44.521 "supported_io_types": { 00:09:44.521 "read": true, 
00:09:44.521 "write": true, 00:09:44.521 "unmap": true, 00:09:44.521 "flush": true, 00:09:44.521 "reset": true, 00:09:44.521 "nvme_admin": true, 00:09:44.521 "nvme_io": true, 00:09:44.521 "nvme_io_md": false, 00:09:44.521 "write_zeroes": true, 00:09:44.521 "zcopy": false, 00:09:44.521 "get_zone_info": false, 00:09:44.521 "zone_management": false, 00:09:44.521 "zone_append": false, 00:09:44.521 "compare": true, 00:09:44.521 "compare_and_write": true, 00:09:44.521 "abort": true, 00:09:44.521 "seek_hole": false, 00:09:44.521 "seek_data": false, 00:09:44.521 "copy": true, 00:09:44.521 "nvme_iov_md": false 00:09:44.521 }, 00:09:44.521 "memory_domains": [ 00:09:44.521 { 00:09:44.521 "dma_device_id": "system", 00:09:44.521 "dma_device_type": 1 00:09:44.521 } 00:09:44.521 ], 00:09:44.521 "driver_specific": { 00:09:44.521 "nvme": [ 00:09:44.521 { 00:09:44.521 "trid": { 00:09:44.521 "trtype": "TCP", 00:09:44.521 "adrfam": "IPv4", 00:09:44.521 "traddr": "10.0.0.2", 00:09:44.521 "trsvcid": "4420", 00:09:44.521 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:44.521 }, 00:09:44.521 "ctrlr_data": { 00:09:44.521 "cntlid": 1, 00:09:44.521 "vendor_id": "0x8086", 00:09:44.521 "model_number": "SPDK bdev Controller", 00:09:44.521 "serial_number": "SPDK0", 00:09:44.521 "firmware_revision": "25.01", 00:09:44.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:44.521 "oacs": { 00:09:44.521 "security": 0, 00:09:44.521 "format": 0, 00:09:44.521 "firmware": 0, 00:09:44.521 "ns_manage": 0 00:09:44.521 }, 00:09:44.521 "multi_ctrlr": true, 00:09:44.521 "ana_reporting": false 00:09:44.521 }, 00:09:44.521 "vs": { 00:09:44.521 "nvme_version": "1.3" 00:09:44.521 }, 00:09:44.521 "ns_data": { 00:09:44.521 "id": 1, 00:09:44.521 "can_share": true 00:09:44.521 } 00:09:44.521 } 00:09:44.521 ], 00:09:44.521 "mp_policy": "active_passive" 00:09:44.521 } 00:09:44.521 } 00:09:44.521 ] 00:09:44.521 09:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=156377 
00:09:44.521 09:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:44.521 09:27:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:44.521 Running I/O for 10 seconds... 00:09:45.906 Latency(us) 00:09:45.906 [2024-11-19T08:27:32.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.906 Nvme0n1 : 1.00 25102.00 98.05 0.00 0.00 0.00 0.00 0.00 00:09:45.906 [2024-11-19T08:27:32.654Z] =================================================================================================================== 00:09:45.906 [2024-11-19T08:27:32.654Z] Total : 25102.00 98.05 0.00 0.00 0.00 0.00 0.00 00:09:45.906 00:09:46.479 09:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:46.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.740 Nvme0n1 : 2.00 25255.50 98.65 0.00 0.00 0.00 0.00 0.00 00:09:46.740 [2024-11-19T08:27:33.488Z] =================================================================================================================== 00:09:46.740 [2024-11-19T08:27:33.488Z] Total : 25255.50 98.65 0.00 0.00 0.00 0.00 0.00 00:09:46.740 00:09:46.740 true 00:09:46.740 09:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:46.740 09:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:47.002 09:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:47.002 09:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:47.002 09:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 156377 00:09:47.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.573 Nvme0n1 : 3.00 25327.00 98.93 0.00 0.00 0.00 0.00 0.00 00:09:47.573 [2024-11-19T08:27:34.321Z] =================================================================================================================== 00:09:47.573 [2024-11-19T08:27:34.321Z] Total : 25327.00 98.93 0.00 0.00 0.00 0.00 0.00 00:09:47.573 00:09:48.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.515 Nvme0n1 : 4.00 25378.75 99.14 0.00 0.00 0.00 0.00 0.00 00:09:48.515 [2024-11-19T08:27:35.263Z] =================================================================================================================== 00:09:48.515 [2024-11-19T08:27:35.263Z] Total : 25378.75 99.14 0.00 0.00 0.00 0.00 0.00 00:09:48.515 00:09:49.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.896 Nvme0n1 : 5.00 25410.20 99.26 0.00 0.00 0.00 0.00 0.00 00:09:49.896 [2024-11-19T08:27:36.644Z] =================================================================================================================== 00:09:49.896 [2024-11-19T08:27:36.644Z] Total : 25410.20 99.26 0.00 0.00 0.00 0.00 0.00 00:09:49.896 00:09:50.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.839 Nvme0n1 : 6.00 25433.83 99.35 0.00 0.00 0.00 0.00 0.00 00:09:50.839 [2024-11-19T08:27:37.587Z] =================================================================================================================== 00:09:50.839 
[2024-11-19T08:27:37.587Z] Total : 25433.83 99.35 0.00 0.00 0.00 0.00 0.00 00:09:50.839 00:09:51.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.782 Nvme0n1 : 7.00 25455.14 99.43 0.00 0.00 0.00 0.00 0.00 00:09:51.782 [2024-11-19T08:27:38.530Z] =================================================================================================================== 00:09:51.782 [2024-11-19T08:27:38.530Z] Total : 25455.14 99.43 0.00 0.00 0.00 0.00 0.00 00:09:51.782 00:09:52.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.724 Nvme0n1 : 8.00 25473.12 99.50 0.00 0.00 0.00 0.00 0.00 00:09:52.724 [2024-11-19T08:27:39.472Z] =================================================================================================================== 00:09:52.724 [2024-11-19T08:27:39.472Z] Total : 25473.12 99.50 0.00 0.00 0.00 0.00 0.00 00:09:52.724 00:09:53.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.667 Nvme0n1 : 9.00 25485.11 99.55 0.00 0.00 0.00 0.00 0.00 00:09:53.667 [2024-11-19T08:27:40.415Z] =================================================================================================================== 00:09:53.667 [2024-11-19T08:27:40.415Z] Total : 25485.11 99.55 0.00 0.00 0.00 0.00 0.00 00:09:53.667 00:09:54.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.611 Nvme0n1 : 10.00 25494.10 99.59 0.00 0.00 0.00 0.00 0.00 00:09:54.611 [2024-11-19T08:27:41.360Z] =================================================================================================================== 00:09:54.612 [2024-11-19T08:27:41.360Z] Total : 25494.10 99.59 0.00 0.00 0.00 0.00 0.00 00:09:54.612 00:09:54.612 00:09:54.612 Latency(us) 00:09:54.612 [2024-11-19T08:27:41.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:54.612 Nvme0n1 : 10.00 25497.62 99.60 0.00 0.00 5017.02 3072.00 9393.49 00:09:54.612 [2024-11-19T08:27:41.360Z] =================================================================================================================== 00:09:54.612 [2024-11-19T08:27:41.360Z] Total : 25497.62 99.60 0.00 0.00 5017.02 3072.00 9393.49 00:09:54.612 { 00:09:54.612 "results": [ 00:09:54.612 { 00:09:54.612 "job": "Nvme0n1", 00:09:54.612 "core_mask": "0x2", 00:09:54.612 "workload": "randwrite", 00:09:54.612 "status": "finished", 00:09:54.612 "queue_depth": 128, 00:09:54.612 "io_size": 4096, 00:09:54.612 "runtime": 10.003639, 00:09:54.612 "iops": 25497.621415566875, 00:09:54.612 "mibps": 99.6000836545581, 00:09:54.612 "io_failed": 0, 00:09:54.612 "io_timeout": 0, 00:09:54.612 "avg_latency_us": 5017.017347724211, 00:09:54.612 "min_latency_us": 3072.0, 00:09:54.612 "max_latency_us": 9393.493333333334 00:09:54.612 } 00:09:54.612 ], 00:09:54.612 "core_count": 1 00:09:54.612 } 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 156050 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 156050 ']' 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 156050 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156050 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156050' 00:09:54.612 killing process with pid 156050 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 156050 00:09:54.612 Received shutdown signal, test time was about 10.000000 seconds 00:09:54.612 00:09:54.612 Latency(us) 00:09:54.612 [2024-11-19T08:27:41.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.612 [2024-11-19T08:27:41.360Z] =================================================================================================================== 00:09:54.612 [2024-11-19T08:27:41.360Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:54.612 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 156050 00:09:54.873 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:54.873 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:55.134 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:55.134 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:55.396 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:55.396 09:27:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:55.396 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 152121 00:09:55.396 09:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 152121 00:09:55.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 152121 Killed "${NVMF_APP[@]}" "$@" 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=158521 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 158521 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 158521 ']' 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.396 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:55.396 [2024-11-19 09:27:42.065880] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:55.397 [2024-11-19 09:27:42.065935] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.659 [2024-11-19 09:27:42.157862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.659 [2024-11-19 09:27:42.188003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.659 [2024-11-19 09:27:42.188030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.659 [2024-11-19 09:27:42.188035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.659 [2024-11-19 09:27:42.188041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.659 [2024-11-19 09:27:42.188045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:55.659 [2024-11-19 09:27:42.188506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.231 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.231 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:56.231 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.231 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.231 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:56.231 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.232 09:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:56.492 [2024-11-19 09:27:43.049773] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:56.492 [2024-11-19 09:27:43.049844] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:56.492 [2024-11-19 09:27:43.049865] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:56.492 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:56.492 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7f7820c7-1e41-4db5-b31e-c6487b9aa4da 00:09:56.492 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7f7820c7-1e41-4db5-b31e-c6487b9aa4da 
00:09:56.492 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.492 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:56.492 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.492 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.492 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:56.492 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7f7820c7-1e41-4db5-b31e-c6487b9aa4da -t 2000 00:09:56.755 [ 00:09:56.755 { 00:09:56.755 "name": "7f7820c7-1e41-4db5-b31e-c6487b9aa4da", 00:09:56.755 "aliases": [ 00:09:56.755 "lvs/lvol" 00:09:56.755 ], 00:09:56.755 "product_name": "Logical Volume", 00:09:56.755 "block_size": 4096, 00:09:56.755 "num_blocks": 38912, 00:09:56.755 "uuid": "7f7820c7-1e41-4db5-b31e-c6487b9aa4da", 00:09:56.755 "assigned_rate_limits": { 00:09:56.755 "rw_ios_per_sec": 0, 00:09:56.755 "rw_mbytes_per_sec": 0, 00:09:56.755 "r_mbytes_per_sec": 0, 00:09:56.755 "w_mbytes_per_sec": 0 00:09:56.755 }, 00:09:56.755 "claimed": false, 00:09:56.755 "zoned": false, 00:09:56.755 "supported_io_types": { 00:09:56.755 "read": true, 00:09:56.755 "write": true, 00:09:56.755 "unmap": true, 00:09:56.755 "flush": false, 00:09:56.755 "reset": true, 00:09:56.755 "nvme_admin": false, 00:09:56.755 "nvme_io": false, 00:09:56.755 "nvme_io_md": false, 00:09:56.755 "write_zeroes": true, 00:09:56.755 "zcopy": false, 00:09:56.755 "get_zone_info": false, 00:09:56.755 "zone_management": false, 00:09:56.755 "zone_append": 
false, 00:09:56.755 "compare": false, 00:09:56.755 "compare_and_write": false, 00:09:56.755 "abort": false, 00:09:56.755 "seek_hole": true, 00:09:56.755 "seek_data": true, 00:09:56.755 "copy": false, 00:09:56.755 "nvme_iov_md": false 00:09:56.755 }, 00:09:56.755 "driver_specific": { 00:09:56.755 "lvol": { 00:09:56.755 "lvol_store_uuid": "577d0fb3-f6c4-44eb-ad68-8e01d184972f", 00:09:56.755 "base_bdev": "aio_bdev", 00:09:56.755 "thin_provision": false, 00:09:56.755 "num_allocated_clusters": 38, 00:09:56.755 "snapshot": false, 00:09:56.755 "clone": false, 00:09:56.755 "esnap_clone": false 00:09:56.755 } 00:09:56.755 } 00:09:56.755 } 00:09:56.755 ] 00:09:56.755 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:56.755 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:56.755 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:57.017 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:57.017 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:57.017 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:57.017 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:57.017 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:57.278 [2024-11-19 09:27:43.866377] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.278 09:27:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:57.278 09:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:57.538 request: 00:09:57.538 { 00:09:57.538 "uuid": "577d0fb3-f6c4-44eb-ad68-8e01d184972f", 00:09:57.538 "method": "bdev_lvol_get_lvstores", 00:09:57.538 "req_id": 1 00:09:57.538 } 00:09:57.538 Got JSON-RPC error response 00:09:57.538 response: 00:09:57.538 { 00:09:57.538 "code": -19, 00:09:57.538 "message": "No such device" 00:09:57.538 } 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:57.538 aio_bdev 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7f7820c7-1e41-4db5-b31e-c6487b9aa4da 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7f7820c7-1e41-4db5-b31e-c6487b9aa4da 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.538 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:57.799 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7f7820c7-1e41-4db5-b31e-c6487b9aa4da -t 2000 00:09:58.060 [ 00:09:58.060 { 00:09:58.060 "name": "7f7820c7-1e41-4db5-b31e-c6487b9aa4da", 00:09:58.060 "aliases": [ 00:09:58.060 "lvs/lvol" 00:09:58.060 ], 00:09:58.060 "product_name": "Logical Volume", 00:09:58.060 "block_size": 4096, 00:09:58.060 "num_blocks": 38912, 00:09:58.060 "uuid": "7f7820c7-1e41-4db5-b31e-c6487b9aa4da", 00:09:58.060 "assigned_rate_limits": { 00:09:58.060 "rw_ios_per_sec": 0, 00:09:58.060 "rw_mbytes_per_sec": 0, 00:09:58.060 "r_mbytes_per_sec": 0, 00:09:58.060 "w_mbytes_per_sec": 0 00:09:58.060 }, 00:09:58.060 "claimed": false, 00:09:58.060 "zoned": false, 00:09:58.060 "supported_io_types": { 00:09:58.060 "read": true, 00:09:58.060 "write": true, 00:09:58.060 "unmap": true, 00:09:58.060 "flush": false, 00:09:58.060 "reset": true, 00:09:58.060 "nvme_admin": false, 00:09:58.060 "nvme_io": false, 00:09:58.060 "nvme_io_md": false, 00:09:58.060 "write_zeroes": true, 00:09:58.060 "zcopy": false, 00:09:58.060 "get_zone_info": false, 00:09:58.060 "zone_management": false, 00:09:58.060 "zone_append": false, 00:09:58.060 "compare": false, 00:09:58.060 "compare_and_write": false, 
00:09:58.060 "abort": false, 00:09:58.060 "seek_hole": true, 00:09:58.060 "seek_data": true, 00:09:58.060 "copy": false, 00:09:58.060 "nvme_iov_md": false 00:09:58.060 }, 00:09:58.060 "driver_specific": { 00:09:58.060 "lvol": { 00:09:58.060 "lvol_store_uuid": "577d0fb3-f6c4-44eb-ad68-8e01d184972f", 00:09:58.060 "base_bdev": "aio_bdev", 00:09:58.060 "thin_provision": false, 00:09:58.060 "num_allocated_clusters": 38, 00:09:58.060 "snapshot": false, 00:09:58.060 "clone": false, 00:09:58.060 "esnap_clone": false 00:09:58.060 } 00:09:58.060 } 00:09:58.060 } 00:09:58.060 ] 00:09:58.060 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:58.060 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:58.060 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:58.060 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:58.060 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:58.060 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:58.321 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:58.321 09:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7f7820c7-1e41-4db5-b31e-c6487b9aa4da 00:09:58.582 09:27:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 577d0fb3-f6c4-44eb-ad68-8e01d184972f 00:09:58.582 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:58.843 00:09:58.843 real 0m17.391s 00:09:58.843 user 0m45.732s 00:09:58.843 sys 0m2.977s 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.843 ************************************ 00:09:58.843 END TEST lvs_grow_dirty 00:09:58.843 ************************************ 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:58.843 nvmf_trace.0 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.843 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.843 rmmod nvme_tcp 00:09:59.105 rmmod nvme_fabrics 00:09:59.105 rmmod nvme_keyring 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 158521 ']' 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 158521 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 158521 ']' 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 158521 
00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158521 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158521' 00:09:59.105 killing process with pid 158521 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 158521 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 158521 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.105 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.651 09:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.651 00:10:01.651 real 0m44.669s 00:10:01.651 user 1m7.601s 00:10:01.651 sys 0m10.604s 00:10:01.651 09:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.651 09:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:01.651 ************************************ 00:10:01.651 END TEST nvmf_lvs_grow 00:10:01.651 ************************************ 00:10:01.651 09:27:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:01.651 09:27:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.651 09:27:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.651 09:27:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.651 ************************************ 00:10:01.651 START TEST nvmf_bdev_io_wait 00:10:01.651 ************************************ 00:10:01.651 09:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:01.651 * Looking for test storage... 
00:10:01.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:01.651 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.651 --rc genhtml_branch_coverage=1 00:10:01.651 --rc genhtml_function_coverage=1 00:10:01.651 --rc genhtml_legend=1 00:10:01.651 --rc geninfo_all_blocks=1 00:10:01.651 --rc geninfo_unexecuted_blocks=1 00:10:01.651 00:10:01.651 ' 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:01.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.651 --rc genhtml_branch_coverage=1 00:10:01.651 --rc genhtml_function_coverage=1 00:10:01.651 --rc genhtml_legend=1 00:10:01.651 --rc geninfo_all_blocks=1 00:10:01.651 --rc geninfo_unexecuted_blocks=1 00:10:01.651 00:10:01.651 ' 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:01.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.651 --rc genhtml_branch_coverage=1 00:10:01.651 --rc genhtml_function_coverage=1 00:10:01.651 --rc genhtml_legend=1 00:10:01.651 --rc geninfo_all_blocks=1 00:10:01.651 --rc geninfo_unexecuted_blocks=1 00:10:01.651 00:10:01.651 ' 00:10:01.651 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:01.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.651 --rc genhtml_branch_coverage=1 00:10:01.651 --rc genhtml_function_coverage=1 00:10:01.651 --rc genhtml_legend=1 00:10:01.651 --rc geninfo_all_blocks=1 00:10:01.652 --rc geninfo_unexecuted_blocks=1 00:10:01.652 00:10:01.652 ' 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.652 09:27:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.652 09:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:09.798 09:27:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:09.798 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.798 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:09.799 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.799 09:27:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:09.799 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.799 
09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:09.799 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.799 09:27:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:09.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:10:09.799 00:10:09.799 --- 10.0.0.2 ping statistics --- 00:10:09.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.799 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:09.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:10:09.799 00:10:09.799 --- 10.0.0.1 ping statistics --- 00:10:09.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.799 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=163497 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 163497 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 163497 ']' 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.799 09:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.799 [2024-11-19 09:27:55.670885] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:09.799 [2024-11-19 09:27:55.670951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.799 [2024-11-19 09:27:55.772852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.799 [2024-11-19 09:27:55.828459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.799 [2024-11-19 09:27:55.828511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:09.799 [2024-11-19 09:27:55.828519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.799 [2024-11-19 09:27:55.828527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.799 [2024-11-19 09:27:55.828533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.799 [2024-11-19 09:27:55.830491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.799 [2024-11-19 09:27:55.830625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.799 [2024-11-19 09:27:55.830777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.799 [2024-11-19 09:27:55.830777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.800 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.800 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:09.800 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:09.800 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:09.800 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.800 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.800 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:09.800 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.800 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.063 09:27:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.063 [2024-11-19 09:27:56.613673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.063 Malloc0 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.063 
09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.063 [2024-11-19 09:27:56.679185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=163841 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=163843 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:10.063 { 00:10:10.063 "params": { 00:10:10.063 "name": "Nvme$subsystem", 00:10:10.063 "trtype": "$TEST_TRANSPORT", 00:10:10.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.063 "adrfam": "ipv4", 00:10:10.063 "trsvcid": "$NVMF_PORT", 00:10:10.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.063 "hdgst": ${hdgst:-false}, 00:10:10.063 "ddgst": ${ddgst:-false} 00:10:10.063 }, 00:10:10.063 "method": "bdev_nvme_attach_controller" 00:10:10.063 } 00:10:10.063 EOF 00:10:10.063 )") 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=163845 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:10.063 { 00:10:10.063 "params": { 00:10:10.063 
"name": "Nvme$subsystem", 00:10:10.063 "trtype": "$TEST_TRANSPORT", 00:10:10.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.063 "adrfam": "ipv4", 00:10:10.063 "trsvcid": "$NVMF_PORT", 00:10:10.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.063 "hdgst": ${hdgst:-false}, 00:10:10.063 "ddgst": ${ddgst:-false} 00:10:10.063 }, 00:10:10.063 "method": "bdev_nvme_attach_controller" 00:10:10.063 } 00:10:10.063 EOF 00:10:10.063 )") 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=163848 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:10.063 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:10.063 { 00:10:10.063 "params": { 00:10:10.064 "name": "Nvme$subsystem", 00:10:10.064 "trtype": "$TEST_TRANSPORT", 00:10:10.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.064 "adrfam": "ipv4", 00:10:10.064 "trsvcid": "$NVMF_PORT", 00:10:10.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.064 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:10:10.064 "hdgst": ${hdgst:-false}, 00:10:10.064 "ddgst": ${ddgst:-false} 00:10:10.064 }, 00:10:10.064 "method": "bdev_nvme_attach_controller" 00:10:10.064 } 00:10:10.064 EOF 00:10:10.064 )") 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:10.064 { 00:10:10.064 "params": { 00:10:10.064 "name": "Nvme$subsystem", 00:10:10.064 "trtype": "$TEST_TRANSPORT", 00:10:10.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.064 "adrfam": "ipv4", 00:10:10.064 "trsvcid": "$NVMF_PORT", 00:10:10.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.064 "hdgst": ${hdgst:-false}, 00:10:10.064 "ddgst": ${ddgst:-false} 00:10:10.064 }, 00:10:10.064 "method": "bdev_nvme_attach_controller" 00:10:10.064 } 00:10:10.064 EOF 00:10:10.064 )") 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 163841 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:10.064 "params": { 00:10:10.064 "name": "Nvme1", 00:10:10.064 "trtype": "tcp", 00:10:10.064 "traddr": "10.0.0.2", 00:10:10.064 "adrfam": "ipv4", 00:10:10.064 "trsvcid": "4420", 00:10:10.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.064 "hdgst": false, 00:10:10.064 "ddgst": false 00:10:10.064 }, 00:10:10.064 "method": "bdev_nvme_attach_controller" 00:10:10.064 }' 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:10.064 "params": { 00:10:10.064 "name": "Nvme1", 00:10:10.064 "trtype": "tcp", 00:10:10.064 "traddr": "10.0.0.2", 00:10:10.064 "adrfam": "ipv4", 00:10:10.064 "trsvcid": "4420", 00:10:10.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.064 "hdgst": false, 00:10:10.064 "ddgst": false 00:10:10.064 }, 00:10:10.064 "method": "bdev_nvme_attach_controller" 00:10:10.064 }' 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:10.064 "params": { 00:10:10.064 "name": "Nvme1", 00:10:10.064 "trtype": "tcp", 00:10:10.064 "traddr": "10.0.0.2", 00:10:10.064 "adrfam": "ipv4", 00:10:10.064 "trsvcid": "4420", 00:10:10.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.064 "hdgst": false, 00:10:10.064 "ddgst": false 00:10:10.064 }, 00:10:10.064 "method": "bdev_nvme_attach_controller" 00:10:10.064 }' 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:10.064 09:27:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:10.064 "params": { 00:10:10.064 "name": "Nvme1", 00:10:10.064 "trtype": "tcp", 00:10:10.064 "traddr": "10.0.0.2", 00:10:10.064 "adrfam": "ipv4", 00:10:10.064 "trsvcid": "4420", 00:10:10.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.064 "hdgst": false, 00:10:10.064 "ddgst": false 00:10:10.064 }, 00:10:10.064 "method": "bdev_nvme_attach_controller" 00:10:10.064 }' 00:10:10.064 [2024-11-19 09:27:56.739022] Starting SPDK v25.01-pre git sha1 
d47eb51c9 / DPDK 24.03.0 initialization... 00:10:10.064 [2024-11-19 09:27:56.739022] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:10.064 [2024-11-19 09:27:56.739026] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:10.064 [2024-11-19 09:27:56.739096] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:10.064 [2024-11-19 09:27:56.739097] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:10.064 [2024-11-19 09:27:56.739098] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:10.064 [2024-11-19 09:27:56.743729] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:10.064 [2024-11-19 09:27:56.743798] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:10.326 [2024-11-19 09:27:56.959113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.326 [2024-11-19 09:27:56.999590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:10.326 [2024-11-19 09:27:57.053170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.587 [2024-11-19 09:27:57.092152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:10.587 [2024-11-19 09:27:57.150434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.587 [2024-11-19 09:27:57.193214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.587 [2024-11-19 09:27:57.201363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.587 [2024-11-19 09:27:57.239476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:10.850 Running I/O for 1 seconds... 00:10:10.850 Running I/O for 1 seconds... 00:10:10.850 Running I/O for 1 seconds... 00:10:10.850 Running I/O for 1 seconds... 
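[Editorial sketch] The target setup recorded above (the nvmf/common.sh netns plumbing and the bdev_io_wait.sh RPC calls) can be summarized as the following dry-run script. Interface names, addresses, and NQNs are taken from this log; `rpc.py` is a stand-in for `scripts/rpc.py` in an SPDK checkout (the test actually goes through an `rpc_cmd` wrapper), and nothing here executes -- each command is only printed and recorded.

```shell
#!/bin/sh
# Dry-run sketch of the NVMe/TCP loopback topology set up in this log.
# Names and addresses come from the log; rpc.py is an assumed stand-in.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
CMDS=""
# Record and print each command; replace the body with "$@" (as root) to apply.
run() { CMDS="$CMDS$*; "; echo "+ $*"; }

# Move the target-side port into its own namespace and address both ends,
# so initiator (10.0.0.1) and target (10.0.0.2) talk over a real NIC pair.
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up

# Target configuration, mirroring bdev_io_wait.sh steps 18-25 in the log.
run rpc.py bdev_set_options -p 5 -c 1
run rpc.py framework_start_init
run rpc.py nvmf_create_transport -t tcp -o -u 8192
run rpc.py bdev_malloc_create 64 512 -b Malloc0
run rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The four bdevperf instances (write/read/flush/unmap) then connect to that listener with the generated `bdev_nvme_attach_controller` JSON shown above.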
00:10:11.794 13465.00 IOPS, 52.60 MiB/s 00:10:11.794 Latency(us) 00:10:11.794 [2024-11-19T08:27:58.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.794 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:11.794 Nvme1n1 : 1.01 13524.63 52.83 0.00 0.00 9433.08 4751.36 15619.41 00:10:11.794 [2024-11-19T08:27:58.542Z] =================================================================================================================== 00:10:11.794 [2024-11-19T08:27:58.542Z] Total : 13524.63 52.83 0.00 0.00 9433.08 4751.36 15619.41 00:10:11.794 5809.00 IOPS, 22.69 MiB/s 00:10:11.794 Latency(us) 00:10:11.794 [2024-11-19T08:27:58.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.794 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:11.794 Nvme1n1 : 1.02 5834.49 22.79 0.00 0.00 21731.09 6471.68 30146.56 00:10:11.794 [2024-11-19T08:27:58.542Z] =================================================================================================================== 00:10:11.794 [2024-11-19T08:27:58.542Z] Total : 5834.49 22.79 0.00 0.00 21731.09 6471.68 30146.56 00:10:11.794 187848.00 IOPS, 733.78 MiB/s 00:10:11.794 Latency(us) 00:10:11.794 [2024-11-19T08:27:58.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.794 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:11.794 Nvme1n1 : 1.00 187468.69 732.30 0.00 0.00 678.77 302.08 1979.73 00:10:11.794 [2024-11-19T08:27:58.542Z] =================================================================================================================== 00:10:11.794 [2024-11-19T08:27:58.542Z] Total : 187468.69 732.30 0.00 0.00 678.77 302.08 1979.73 00:10:11.794 5966.00 IOPS, 23.30 MiB/s 00:10:11.794 Latency(us) 00:10:11.794 [2024-11-19T08:27:58.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.794 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:10:11.794 Nvme1n1 : 1.01 6055.18 23.65 0.00 0.00 21058.89 5434.03 45875.20 00:10:11.794 [2024-11-19T08:27:58.542Z] =================================================================================================================== 00:10:11.794 [2024-11-19T08:27:58.542Z] Total : 6055.18 23.65 0.00 0.00 21058.89 5434.03 45875.20 00:10:11.794 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 163843 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 163845 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 163848 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.055 
09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.055 rmmod nvme_tcp 00:10:12.055 rmmod nvme_fabrics 00:10:12.055 rmmod nvme_keyring 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 163497 ']' 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 163497 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 163497 ']' 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 163497 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 163497 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 163497' 00:10:12.055 killing process with pid 163497 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 163497 00:10:12.055 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 163497 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.316 09:27:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.236 09:28:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.236 00:10:14.236 real 0m13.002s 00:10:14.236 user 0m19.647s 00:10:14.236 sys 0m7.380s 00:10:14.236 09:28:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.236 09:28:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.236 ************************************ 00:10:14.236 END TEST nvmf_bdev_io_wait 00:10:14.236 
************************************ 00:10:14.497 09:28:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:14.498 09:28:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.498 09:28:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.498 09:28:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.498 ************************************ 00:10:14.498 START TEST nvmf_queue_depth 00:10:14.498 ************************************ 00:10:14.498 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:14.498 * Looking for test storage... 00:10:14.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.498 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.498 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.498 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.760 09:28:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.760 --rc genhtml_branch_coverage=1 00:10:14.760 --rc genhtml_function_coverage=1 00:10:14.760 --rc genhtml_legend=1 00:10:14.760 --rc geninfo_all_blocks=1 00:10:14.760 --rc 
geninfo_unexecuted_blocks=1 00:10:14.760 00:10:14.760 ' 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.760 --rc genhtml_branch_coverage=1 00:10:14.760 --rc genhtml_function_coverage=1 00:10:14.760 --rc genhtml_legend=1 00:10:14.760 --rc geninfo_all_blocks=1 00:10:14.760 --rc geninfo_unexecuted_blocks=1 00:10:14.760 00:10:14.760 ' 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.760 --rc genhtml_branch_coverage=1 00:10:14.760 --rc genhtml_function_coverage=1 00:10:14.760 --rc genhtml_legend=1 00:10:14.760 --rc geninfo_all_blocks=1 00:10:14.760 --rc geninfo_unexecuted_blocks=1 00:10:14.760 00:10:14.760 ' 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.760 --rc genhtml_branch_coverage=1 00:10:14.760 --rc genhtml_function_coverage=1 00:10:14.760 --rc genhtml_legend=1 00:10:14.760 --rc geninfo_all_blocks=1 00:10:14.760 --rc geninfo_unexecuted_blocks=1 00:10:14.760 00:10:14.760 ' 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.760 09:28:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.760 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.761 09:28:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.761 09:28:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.761 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.916 09:28:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.916 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:22.917 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:22.917 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:22.917 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:22.917 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.917 
09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:10:22.917 00:10:22.917 --- 10.0.0.2 ping statistics --- 00:10:22.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.917 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:10:22.917 00:10:22.917 --- 10.0.0.1 ping statistics --- 00:10:22.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.917 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=168545 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 168545 
00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 168545 ']' 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.917 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.918 [2024-11-19 09:28:08.816600] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:22.918 [2024-11-19 09:28:08.816663] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.918 [2024-11-19 09:28:08.917574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.918 [2024-11-19 09:28:08.967965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.918 [2024-11-19 09:28:08.968012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:22.918 [2024-11-19 09:28:08.968020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.918 [2024-11-19 09:28:08.968027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.918 [2024-11-19 09:28:08.968033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.918 [2024-11-19 09:28:08.968772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.918 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.918 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:22.918 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:22.918 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.918 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.180 [2024-11-19 09:28:09.678260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.180 Malloc0 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.180 [2024-11-19 09:28:09.739410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.180 09:28:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=168604 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 168604 /var/tmp/bdevperf.sock 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 168604 ']' 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.180 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:23.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:23.181 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.181 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.181 [2024-11-19 09:28:09.797088] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:23.181 [2024-11-19 09:28:09.797149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168604 ] 00:10:23.181 [2024-11-19 09:28:09.887941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.442 [2024-11-19 09:28:09.940755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.016 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.016 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:24.016 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:24.016 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.016 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.016 NVMe0n1 00:10:24.016 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.016 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:24.277 Running I/O for 10 seconds... 
00:10:26.167 9223.00 IOPS, 36.03 MiB/s [2024-11-19T08:28:13.858Z] 10514.00 IOPS, 41.07 MiB/s [2024-11-19T08:28:15.244Z] 10912.67 IOPS, 42.63 MiB/s [2024-11-19T08:28:16.190Z] 11051.50 IOPS, 43.17 MiB/s [2024-11-19T08:28:17.133Z] 11430.40 IOPS, 44.65 MiB/s [2024-11-19T08:28:18.076Z] 11635.50 IOPS, 45.45 MiB/s [2024-11-19T08:28:19.018Z] 11876.14 IOPS, 46.39 MiB/s [2024-11-19T08:28:19.959Z] 12033.88 IOPS, 47.01 MiB/s [2024-11-19T08:28:20.902Z] 12173.00 IOPS, 47.55 MiB/s [2024-11-19T08:28:21.163Z] 12286.60 IOPS, 47.99 MiB/s 00:10:34.415 Latency(us) 00:10:34.415 [2024-11-19T08:28:21.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.415 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:34.415 Verification LBA range: start 0x0 length 0x4000 00:10:34.415 NVMe0n1 : 10.05 12323.83 48.14 0.00 0.00 82818.34 22609.92 81701.55 00:10:34.415 [2024-11-19T08:28:21.163Z] =================================================================================================================== 00:10:34.415 [2024-11-19T08:28:21.163Z] Total : 12323.83 48.14 0.00 0.00 82818.34 22609.92 81701.55 00:10:34.415 { 00:10:34.415 "results": [ 00:10:34.415 { 00:10:34.415 "job": "NVMe0n1", 00:10:34.415 "core_mask": "0x1", 00:10:34.415 "workload": "verify", 00:10:34.415 "status": "finished", 00:10:34.415 "verify_range": { 00:10:34.415 "start": 0, 00:10:34.415 "length": 16384 00:10:34.415 }, 00:10:34.415 "queue_depth": 1024, 00:10:34.415 "io_size": 4096, 00:10:34.415 "runtime": 10.052398, 00:10:34.415 "iops": 12323.825618524057, 00:10:34.415 "mibps": 48.1399438223596, 00:10:34.415 "io_failed": 0, 00:10:34.415 "io_timeout": 0, 00:10:34.415 "avg_latency_us": 82818.33872461335, 00:10:34.415 "min_latency_us": 22609.92, 00:10:34.415 "max_latency_us": 81701.54666666666 00:10:34.415 } 00:10:34.415 ], 00:10:34.415 "core_count": 1 00:10:34.415 } 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 168604 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 168604 ']' 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 168604 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 168604 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 168604' 00:10:34.415 killing process with pid 168604 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 168604 00:10:34.415 Received shutdown signal, test time was about 10.000000 seconds 00:10:34.415 00:10:34.415 Latency(us) 00:10:34.415 [2024-11-19T08:28:21.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.415 [2024-11-19T08:28:21.163Z] =================================================================================================================== 00:10:34.415 [2024-11-19T08:28:21.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:34.415 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 168604 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
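The bdevperf result JSON above reports both raw IOPS and throughput in MiB/s for a 4096-byte verify workload at queue depth 1024. The two figures are related by the I/O size, so the reported numbers can be cross-checked. A small sketch (values copied from the result JSON above; the helper name is ours, not part of SPDK):

```python
def mibps(iops: float, io_size: int) -> float:
    """Throughput in MiB/s implied by an IOPS figure and a fixed I/O size."""
    return iops * io_size / (1024 * 1024)

# Figures taken verbatim from the bdevperf JSON in the log above.
reported_iops = 12323.825618524057
reported_mibps = 48.1399438223596
io_size = 4096  # bdevperf was run with -o 4096

derived = mibps(reported_iops, io_size)
assert abs(derived - reported_mibps) < 1e-6
```

The check confirms the two columns of the summary table are internally consistent rather than independently measured.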
00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.415 rmmod nvme_tcp 00:10:34.415 rmmod nvme_fabrics 00:10:34.415 rmmod nvme_keyring 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 168545 ']' 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 168545 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 168545 ']' 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 168545 00:10:34.415 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 168545 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 168545' 00:10:34.676 killing process with pid 168545 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 168545 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 168545 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.676 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.223 09:28:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.223 00:10:37.223 real 0m22.362s 00:10:37.223 user 0m25.554s 00:10:37.223 sys 0m7.086s 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.223 ************************************ 00:10:37.223 END TEST nvmf_queue_depth 00:10:37.223 ************************************ 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.223 ************************************ 00:10:37.223 START TEST nvmf_target_multipath 00:10:37.223 ************************************ 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:37.223 * Looking for test storage... 
00:10:37.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:37.223 09:28:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
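The trace above steps through `cmp_versions` in `scripts/common.sh`: it splits each version string on `.`, `-` or `:` (`IFS=.-:`), walks the fields left to right, and compares them numerically, so `lt 1.15 2` returns true. A rough Python equivalent of that comparison logic, as a sketch rather than a transcription of the script:

```python
import re

def lt(v1: str, v2: str) -> bool:
    """True if v1 < v2, comparing dotted version fields numerically.

    Mirrors the cmp_versions flow traced above: split on '.', '-' or ':',
    compare field by field, and treat missing fields as 0.
    """
    a = [int(x) for x in re.split(r"[.:-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", v2) if x.isdigit()]
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        if x != y:
            return x < y
    return False
```

With this, `lt("1.15", "2")` is true because the first fields already differ (1 < 2), which is exactly the branch the trace takes before exporting the lcov options.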
00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.223 --rc genhtml_branch_coverage=1 00:10:37.223 --rc genhtml_function_coverage=1 00:10:37.223 --rc genhtml_legend=1 00:10:37.223 --rc geninfo_all_blocks=1 00:10:37.223 --rc geninfo_unexecuted_blocks=1 00:10:37.223 00:10:37.223 ' 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.223 --rc genhtml_branch_coverage=1 00:10:37.223 --rc genhtml_function_coverage=1 00:10:37.223 --rc genhtml_legend=1 00:10:37.223 --rc geninfo_all_blocks=1 00:10:37.223 --rc geninfo_unexecuted_blocks=1 00:10:37.223 00:10:37.223 ' 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.223 --rc genhtml_branch_coverage=1 00:10:37.223 --rc genhtml_function_coverage=1 00:10:37.223 --rc genhtml_legend=1 00:10:37.223 --rc geninfo_all_blocks=1 00:10:37.223 --rc geninfo_unexecuted_blocks=1 00:10:37.223 00:10:37.223 ' 00:10:37.223 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.223 --rc genhtml_branch_coverage=1 00:10:37.223 --rc genhtml_function_coverage=1 00:10:37.224 --rc genhtml_legend=1 00:10:37.224 --rc geninfo_all_blocks=1 00:10:37.224 --rc geninfo_unexecuted_blocks=1 00:10:37.224 00:10:37.224 ' 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.224 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:45.373 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.373 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:45.374 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:45.374 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.374 09:28:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:45.374 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.374 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:10:45.374 00:10:45.374 --- 10.0.0.2 ping statistics --- 00:10:45.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.374 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:45.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:10:45.374 00:10:45.374 --- 10.0.0.1 ping statistics --- 00:10:45.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.374 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:45.374 only one NIC for nvmf test 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:45.374 09:28:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.374 rmmod nvme_tcp 00:10:45.374 rmmod nvme_fabrics 00:10:45.374 rmmod nvme_keyring 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.374 09:28:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.763 00:10:46.763 real 0m9.904s 00:10:46.763 user 0m2.172s 00:10:46.763 sys 0m5.688s 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:46.763 ************************************ 00:10:46.763 END TEST nvmf_target_multipath 00:10:46.763 ************************************ 00:10:46.763 09:28:33 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:46.764 09:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.764 09:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.764 09:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.764 ************************************ 00:10:46.764 START TEST nvmf_zcopy 00:10:46.764 ************************************ 00:10:46.764 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:47.026 * Looking for test storage... 00:10:47.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.026 09:28:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.026 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:47.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.027 --rc genhtml_branch_coverage=1 00:10:47.027 --rc genhtml_function_coverage=1 00:10:47.027 --rc genhtml_legend=1 00:10:47.027 --rc geninfo_all_blocks=1 00:10:47.027 --rc geninfo_unexecuted_blocks=1 00:10:47.027 00:10:47.027 ' 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.027 --rc genhtml_branch_coverage=1 00:10:47.027 --rc genhtml_function_coverage=1 00:10:47.027 --rc genhtml_legend=1 00:10:47.027 --rc geninfo_all_blocks=1 00:10:47.027 --rc geninfo_unexecuted_blocks=1 00:10:47.027 00:10:47.027 ' 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.027 --rc genhtml_branch_coverage=1 00:10:47.027 --rc genhtml_function_coverage=1 00:10:47.027 --rc genhtml_legend=1 00:10:47.027 --rc geninfo_all_blocks=1 00:10:47.027 --rc geninfo_unexecuted_blocks=1 00:10:47.027 00:10:47.027 ' 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.027 --rc genhtml_branch_coverage=1 00:10:47.027 --rc 
genhtml_function_coverage=1 00:10:47.027 --rc genhtml_legend=1 00:10:47.027 --rc geninfo_all_blocks=1 00:10:47.027 --rc geninfo_unexecuted_blocks=1 00:10:47.027 00:10:47.027 ' 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.027 09:28:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.027 09:28:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.027 09:28:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.179 09:28:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:55.179 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:55.179 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.179 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:55.180 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:55.180 09:28:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:55.180 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.180 09:28:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.180 09:28:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:10:55.180 00:10:55.180 --- 10.0.0.2 ping statistics --- 00:10:55.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.180 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:55.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:10:55.180 00:10:55.180 --- 10.0.0.1 ping statistics --- 00:10:55.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.180 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=179375 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 179375 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 179375 ']' 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.180 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.180 [2024-11-19 09:28:41.300477] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:55.180 [2024-11-19 09:28:41.300545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.180 [2024-11-19 09:28:41.398425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.180 [2024-11-19 09:28:41.449483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.180 [2024-11-19 09:28:41.449530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:55.180 [2024-11-19 09:28:41.449538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.180 [2024-11-19 09:28:41.449549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.180 [2024-11-19 09:28:41.449556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.180 [2024-11-19 09:28:41.450307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.442 [2024-11-19 09:28:42.164412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.442 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.704 [2024-11-19 09:28:42.188678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.704 malloc0 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:55.704 { 00:10:55.704 "params": { 00:10:55.704 "name": "Nvme$subsystem", 00:10:55.704 "trtype": "$TEST_TRANSPORT", 00:10:55.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.704 "adrfam": "ipv4", 00:10:55.704 "trsvcid": "$NVMF_PORT", 00:10:55.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.704 "hdgst": ${hdgst:-false}, 00:10:55.704 "ddgst": ${ddgst:-false} 00:10:55.704 }, 00:10:55.704 "method": "bdev_nvme_attach_controller" 00:10:55.704 } 00:10:55.704 EOF 00:10:55.704 )") 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:55.704 09:28:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:55.704 "params": { 00:10:55.704 "name": "Nvme1", 00:10:55.704 "trtype": "tcp", 00:10:55.704 "traddr": "10.0.0.2", 00:10:55.704 "adrfam": "ipv4", 00:10:55.704 "trsvcid": "4420", 00:10:55.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.704 "hdgst": false, 00:10:55.704 "ddgst": false 00:10:55.704 }, 00:10:55.704 "method": "bdev_nvme_attach_controller" 00:10:55.704 }' 00:10:55.704 [2024-11-19 09:28:42.290780] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:55.704 [2024-11-19 09:28:42.290851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179617 ] 00:10:55.704 [2024-11-19 09:28:42.383800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.704 [2024-11-19 09:28:42.437279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.965 Running I/O for 10 seconds... 
00:10:58.298 6459.00 IOPS, 50.46 MiB/s [2024-11-19T08:28:45.989Z] 7387.00 IOPS, 57.71 MiB/s [2024-11-19T08:28:46.934Z] 8179.33 IOPS, 63.90 MiB/s [2024-11-19T08:28:47.877Z] 8575.25 IOPS, 66.99 MiB/s [2024-11-19T08:28:48.823Z] 8815.20 IOPS, 68.87 MiB/s [2024-11-19T08:28:49.782Z] 8971.50 IOPS, 70.09 MiB/s [2024-11-19T08:28:50.727Z] 9085.43 IOPS, 70.98 MiB/s [2024-11-19T08:28:51.669Z] 9163.25 IOPS, 71.59 MiB/s [2024-11-19T08:28:53.057Z] 9225.89 IOPS, 72.08 MiB/s [2024-11-19T08:28:53.058Z] 9279.80 IOPS, 72.50 MiB/s 00:11:06.310 Latency(us) 00:11:06.310 [2024-11-19T08:28:53.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.310 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:06.310 Verification LBA range: start 0x0 length 0x1000 00:11:06.310 Nvme1n1 : 10.01 9281.65 72.51 0.00 0.00 13743.84 785.07 27525.12 00:11:06.310 [2024-11-19T08:28:53.058Z] =================================================================================================================== 00:11:06.310 [2024-11-19T08:28:53.058Z] Total : 9281.65 72.51 0.00 0.00 13743.84 785.07 27525.12 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=181637 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:06.310 09:28:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:06.310 { 00:11:06.310 "params": { 00:11:06.310 "name": "Nvme$subsystem", 00:11:06.310 "trtype": "$TEST_TRANSPORT", 00:11:06.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.310 "adrfam": "ipv4", 00:11:06.310 "trsvcid": "$NVMF_PORT", 00:11:06.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.310 "hdgst": ${hdgst:-false}, 00:11:06.310 "ddgst": ${ddgst:-false} 00:11:06.310 }, 00:11:06.310 "method": "bdev_nvme_attach_controller" 00:11:06.310 } 00:11:06.310 EOF 00:11:06.310 )") 00:11:06.310 [2024-11-19 09:28:52.744876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.744907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:06.310 09:28:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:06.310 "params": { 00:11:06.310 "name": "Nvme1", 00:11:06.310 "trtype": "tcp", 00:11:06.310 "traddr": "10.0.0.2", 00:11:06.310 "adrfam": "ipv4", 00:11:06.310 "trsvcid": "4420", 00:11:06.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.310 "hdgst": false, 00:11:06.310 "ddgst": false 00:11:06.310 }, 00:11:06.310 "method": "bdev_nvme_attach_controller" 00:11:06.310 }' 00:11:06.310 [2024-11-19 09:28:52.756874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.756884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.768903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.768910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.780933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.780940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.788714] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:06.310 [2024-11-19 09:28:52.788761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181637 ] 00:11:06.310 [2024-11-19 09:28:52.792964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.792971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.804995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.805002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.817028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.817035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.829059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.829066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.841090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.841097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.853122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.853129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.865152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.865162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:06.310 [2024-11-19 09:28:52.871289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.310 [2024-11-19 09:28:52.877185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.877198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.889214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.889222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.900358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.310 [2024-11-19 09:28:52.901244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.901251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.913282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.913292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.925310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.925324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.937338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.937349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.949368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.949379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.961397] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.961405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.973439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.973456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.985465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.985477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:52.997495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:52.997506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:53.009525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:53.009537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.310 [2024-11-19 09:28:53.021556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.310 [2024-11-19 09:28:53.021565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.069619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.069633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.077732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.077741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 Running I/O for 5 seconds... 
00:11:06.572 [2024-11-19 09:28:53.093233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.093249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.106415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.106431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.119917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.119932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.132640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.132655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.145764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.145779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.159610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.159625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.172210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.172225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.184833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.184848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.197631] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.197646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.210680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.210695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.224014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.224029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.237338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.237352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.250254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.250270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.262803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.262818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.275867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.275882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.289358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.289372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.572 [2024-11-19 09:28:53.302488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:06.572 [2024-11-19 09:28:53.302503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.316283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.316299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.329039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.329054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.342408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.342422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.354927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.354941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.367611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.367625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.380902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.380917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.394165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.394180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.406995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 
[2024-11-19 09:28:53.407010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.420246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.420261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.433493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.433508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.446336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.446351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.459426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.459440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.472833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.472848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.485898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.485912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.499485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.499500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.512070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.512084] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.525268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.525283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.538300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.538314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.551146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.551166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.564328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.833 [2024-11-19 09:28:53.564342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.833 [2024-11-19 09:28:53.577632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:06.834 [2024-11-19 09:28:53.577646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.591093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.591109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.603839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.603854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.616696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.616710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:07.094 [2024-11-19 09:28:53.629292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.629310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.642373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.642387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.655041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.655055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.668817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.668832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.682413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.682427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.694947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.694961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.708232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.708246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.721576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.721590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.734072] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.734086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.747002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.747016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.760478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.760493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.773608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.773622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.786641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.786655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.799796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.799811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.813308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.813323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.094 [2024-11-19 09:28:53.826529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.094 [2024-11-19 09:28:53.826543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.839982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.839996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.853268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.853282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.866644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.866659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.880020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.880041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.893369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.893383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.905978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.905992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.919332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.919346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.932638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.932653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.946034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 
[2024-11-19 09:28:53.946049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.959494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.959508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.972597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.972612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.985448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.985462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:53.998885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:53.998899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:54.011584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:54.011598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:54.025045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:54.025059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:54.038395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:54.038409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:54.051793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:54.051807] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:54.065361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:54.065376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 [2024-11-19 09:28:54.077736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:54.077750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.356 19223.00 IOPS, 150.18 MiB/s [2024-11-19T08:28:54.104Z] [2024-11-19 09:28:54.090226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.356 [2024-11-19 09:28:54.090240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.616 [2024-11-19 09:28:54.103407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.616 [2024-11-19 09:28:54.103422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.616 [2024-11-19 09:28:54.116814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.616 [2024-11-19 09:28:54.116828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.616 [2024-11-19 09:28:54.130309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.616 [2024-11-19 09:28:54.130327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.616 [2024-11-19 09:28:54.143761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.143775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.156511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.156525] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.169629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.169643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.182402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.182416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.195527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.195542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.208803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.208817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.222575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.222590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.236052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.236067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.249473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.249488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.261913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.261927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:07.617 [2024-11-19 09:28:54.275528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.275542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.288394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.288408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.301774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.301788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.314337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.314351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.327169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.327183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.339518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.339532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.617 [2024-11-19 09:28:54.351972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.617 [2024-11-19 09:28:54.351986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.364434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.364448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.377689] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.377703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.391292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.391306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.404227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.404241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.417555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.417569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.431176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.431190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.443983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.443997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.456434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.456448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.469812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.469826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.483139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.483153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.496571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.496585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.510043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.510057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.523422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.523437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.535952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.535966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.548812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.548826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.562085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.562099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.575427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.575441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.588848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 
[2024-11-19 09:28:54.588862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.602079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.877 [2024-11-19 09:28:54.602093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.877 [2024-11-19 09:28:54.615147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.878 [2024-11-19 09:28:54.615166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.627690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.627705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.640107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.640121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.653560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.653574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.667204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.667219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.680374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.680389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.693165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.693180] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.705751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.705766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.719078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.719092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.731791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.731806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.744358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.744373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.757622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.757636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.770516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.770531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.783878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.783892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.797235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.797250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:08.138 [2024-11-19 09:28:54.809942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.809957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.823427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.823442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.836886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.836900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.850190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.850205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.863550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.863565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.138 [2024-11-19 09:28:54.876449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.138 [2024-11-19 09:28:54.876464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.399 [2024-11-19 09:28:54.889681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.399 [2024-11-19 09:28:54.889696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.399 [2024-11-19 09:28:54.902759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.399 [2024-11-19 09:28:54.902774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.399 [2024-11-19 09:28:54.915419] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.399 [2024-11-19 09:28:54.915432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.399 [2024-11-19 09:28:54.927994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.399 [2024-11-19 09:28:54.928008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.399 [2024-11-19 09:28:54.941225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.399 [2024-11-19 09:28:54.941239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.399 [2024-11-19 09:28:54.954904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.399 [2024-11-19 09:28:54.954919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.399 [2024-11-19 09:28:54.968637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.399 [2024-11-19 09:28:54.968651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.399 [2024-11-19 09:28:54.981577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.399 [2024-11-19 09:28:54.981592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:54.994590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:54.994604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.007153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.007174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.020777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.020791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.033246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.033261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.045656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.045671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.058534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.058549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.071702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.071718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.084847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.084862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 19252.00 IOPS, 150.41 MiB/s [2024-11-19T08:28:55.148Z] [2024-11-19 09:28:55.097101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.097115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.109600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.109619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.122505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.122520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.400 [2024-11-19 09:28:55.135286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.400 [2024-11-19 09:28:55.135300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.148779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.148793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.161970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.161984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.174743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.174758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.187967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.187982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.200991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.201005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.214003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.214018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.227458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 
[2024-11-19 09:28:55.227473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.240033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.240048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.252906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.252920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.266125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.266140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.279729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.279744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.292960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.292975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.305884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.305899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.318608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.318622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.332087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.332102] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.345345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.345359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.358400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.358419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.370927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.370941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.384012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.384026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.661 [2024-11-19 09:28:55.397508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.661 [2024-11-19 09:28:55.397522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.409880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.409894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.423514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.423528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.436946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.436961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:08.923 [2024-11-19 09:28:55.449515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.449528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.462339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.462353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.475893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.475907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.489456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.489470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.503000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.503015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.516696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.516710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.530286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.530300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.543248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.543263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.555753] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.555767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.568779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.568793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.581271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.581285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.594118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.594132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.606944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.923 [2024-11-19 09:28:55.606962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.923 [2024-11-19 09:28:55.620211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.924 [2024-11-19 09:28:55.620225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.924 [2024-11-19 09:28:55.633543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.924 [2024-11-19 09:28:55.633557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.924 [2024-11-19 09:28:55.647155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.924 [2024-11-19 09:28:55.647173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.924 [2024-11-19 09:28:55.660421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:08.924 [2024-11-19 09:28:55.660435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.673602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.673617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.686520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.686534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.699793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.699807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.713290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.713304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.726777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.726791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.740304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.740318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.753321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.753336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.767245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 
[2024-11-19 09:28:55.767260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.779972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.779986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.792666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.792680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.804927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.804942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.818013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.818029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.185 [2024-11-19 09:28:55.831089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.185 [2024-11-19 09:28:55.831103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.186 [2024-11-19 09:28:55.844414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.186 [2024-11-19 09:28:55.844427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.186 [2024-11-19 09:28:55.857694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.186 [2024-11-19 09:28:55.857712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.186 [2024-11-19 09:28:55.870791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.186 [2024-11-19 09:28:55.870805] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.186 [2024-11-19 09:28:55.884105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.186 [2024-11-19 09:28:55.884119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.186 [2024-11-19 09:28:55.897494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.186 [2024-11-19 09:28:55.897508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.186 [2024-11-19 09:28:55.911150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.186 [2024-11-19 09:28:55.911169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.186 [2024-11-19 09:28:55.924418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.186 [2024-11-19 09:28:55.924432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:55.937755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:55.937769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:55.950588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:55.950602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:55.963046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:55.963060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:55.976606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:55.976620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:09.448 [2024-11-19 09:28:55.989360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:55.989374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.002386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.002400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.015666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.015680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.028984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.028998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.042093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.042108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.055828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.055843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.068524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.068539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.081439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.081453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 19294.33 IOPS, 150.74 MiB/s 
[2024-11-19T08:28:56.196Z] [2024-11-19 09:28:56.093902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.093917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.106841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.106855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.120182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.120197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.132622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.132636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.145572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.145586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.159097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.159111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.173004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.173018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.448 [2024-11-19 09:28:56.185118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.448 [2024-11-19 09:28:56.185132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.198137] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.711 [2024-11-19 09:28:56.198151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.210806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.711 [2024-11-19 09:28:56.210820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.224662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.711 [2024-11-19 09:28:56.224676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.237603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.711 [2024-11-19 09:28:56.237618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.250792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.711 [2024-11-19 09:28:56.250807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.264341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.711 [2024-11-19 09:28:56.264355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.277247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.711 [2024-11-19 09:28:56.277261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.290790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.711 [2024-11-19 09:28:56.290805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.303206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:09.711 [2024-11-19 09:28:56.303220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.711 [2024-11-19 09:28:56.316348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.316362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.329996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.330010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.343251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.343267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.355625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.355640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.368165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.368179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.381447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.381462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.393989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.394003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.406715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 
[2024-11-19 09:28:56.406730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.420515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.420530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.433037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.433051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.712 [2024-11-19 09:28:56.446296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.712 [2024-11-19 09:28:56.446311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.973 [2024-11-19 09:28:56.459001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.973 [2024-11-19 09:28:56.459015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.973 [2024-11-19 09:28:56.471582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.973 [2024-11-19 09:28:56.471596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.973 [2024-11-19 09:28:56.484370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.484385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.497810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.497825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.510323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.510338] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.522731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.522746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.536093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.536108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.549018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.549033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.562331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.562345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.575198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.575213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.588432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.588450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.601474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.601488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.614716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.614731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:09.974 [2024-11-19 09:28:56.627392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.627406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.640647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.640662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.653387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.653402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.666577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.666591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.680166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.680180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.693522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.693536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.974 [2024-11-19 09:28:56.706635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.974 [2024-11-19 09:28:56.706649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.720411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.720426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.733712] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.733727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.746434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.746449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.758758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.758773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.771663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.771678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.784592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.784606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.797156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.797174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.809639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.809653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.822872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.822887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.835614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.835633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.849305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.849320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.862126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.862141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.874758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.874773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.888191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.888206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.901396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.901411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.913964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.913979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.926426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.926440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.939674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 
[2024-11-19 09:28:56.939688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.953262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.953276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.966625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.966639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.236 [2024-11-19 09:28:56.978794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.236 [2024-11-19 09:28:56.978809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:56.991503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:56.991518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.003676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.003691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.016413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.016428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.029574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.029589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.042207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.042221] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.055653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.055668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.068917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.068931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.082514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.082533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 19302.25 IOPS, 150.80 MiB/s [2024-11-19T08:28:57.246Z] [2024-11-19 09:28:57.095890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.095904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.109134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.109149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.122154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.122172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.135486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.135501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.148816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.148830] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.161877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.161891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.175642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.175656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.188056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.188070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.200340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.200354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.213026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.213040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.225382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.225398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.498 [2024-11-19 09:28:57.238806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.498 [2024-11-19 09:28:57.238820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.252030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.252045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:10.760 [2024-11-19 09:28:57.265503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.265517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.277936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.277950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.290859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.290873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.303948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.303962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.316807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.316821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.329365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.329379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.342071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.342085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.354536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.354550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.368155] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.368172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.381615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.381629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.393831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.393846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.406694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.406708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.420013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.760 [2024-11-19 09:28:57.420028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.760 [2024-11-19 09:28:57.433530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.761 [2024-11-19 09:28:57.433544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.761 [2024-11-19 09:28:57.446070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.761 [2024-11-19 09:28:57.446084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.761 [2024-11-19 09:28:57.458564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.761 [2024-11-19 09:28:57.458578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.761 [2024-11-19 09:28:57.472155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:10.761 [2024-11-19 09:28:57.472172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.761 [2024-11-19 09:28:57.485491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.761 [2024-11-19 09:28:57.485505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.761 [2024-11-19 09:28:57.499055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.761 [2024-11-19 09:28:57.499069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.512441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.512455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.525823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.525838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.538525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.538539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.551461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.551475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.564575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.564589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.577330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 
[2024-11-19 09:28:57.577344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.590531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.590545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.603792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.603807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.617261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.617275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.630721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.630735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.643715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.643729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.657044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.657058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.669877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.669891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.682926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.682940] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.695920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.695935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.709339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.709354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.722943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.022 [2024-11-19 09:28:57.722957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.022 [2024-11-19 09:28:57.735991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.023 [2024-11-19 09:28:57.736005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.023 [2024-11-19 09:28:57.749427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.023 [2024-11-19 09:28:57.749442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.023 [2024-11-19 09:28:57.762464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.023 [2024-11-19 09:28:57.762478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.775841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.775855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.788300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.788314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:11.284 [2024-11-19 09:28:57.800830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.800844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.813169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.813184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.826446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.826460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.839460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.839473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.852878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.852892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.866507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.866521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.879860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.879875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.893220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.893234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.906264] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.906279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.918634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.918648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.931409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.931423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.944540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.944553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.957491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.957505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.970598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.970611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.983074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.983087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:57.996263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:57.996277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:58.008574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:58.008588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.284 [2024-11-19 09:28:58.021881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.284 [2024-11-19 09:28:58.021896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.546 [2024-11-19 09:28:58.035167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.546 [2024-11-19 09:28:58.035182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.546 [2024-11-19 09:28:58.048478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.546 [2024-11-19 09:28:58.048493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 09:28:58.061787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.061801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 09:28:58.075414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.075429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 09:28:58.087740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.087755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 19299.40 IOPS, 150.78 MiB/s [2024-11-19T08:28:58.295Z] [2024-11-19 09:28:58.099600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.099615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 00:11:11.547 Latency(us) 00:11:11.547 [2024-11-19T08:28:58.295Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.547 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:11.547 Nvme1n1 : 5.01 19301.22 150.79 0.00 0.00 6625.69 3017.39 14527.15 00:11:11.547 [2024-11-19T08:28:58.295Z] =================================================================================================================== 00:11:11.547 [2024-11-19T08:28:58.295Z] Total : 19301.22 150.79 0.00 0.00 6625.69 3017.39 14527.15 00:11:11.547 [2024-11-19 09:28:58.109609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.109624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 09:28:58.121631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.121646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 09:28:58.133650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.133661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 09:28:58.145683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.145696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 09:28:58.157711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.157722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 09:28:58.169739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.169748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 
09:28:58.181770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.181781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 [2024-11-19 09:28:58.193801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.547 [2024-11-19 09:28:58.193810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (181637) - No such process 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 181637 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.547 delay0 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.547 09:28:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.547 09:28:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:11.809 [2024-11-19 09:28:58.369364] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:18.403 Initializing NVMe Controllers 00:11:18.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:18.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:18.403 Initialization complete. Launching workers. 00:11:18.403 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 132 00:11:18.403 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 412, failed to submit 40 00:11:18.403 success 225, unsuccessful 187, failed 0 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # 
modprobe -v -r nvme-tcp 00:11:18.403 rmmod nvme_tcp 00:11:18.403 rmmod nvme_fabrics 00:11:18.403 rmmod nvme_keyring 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 179375 ']' 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 179375 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 179375 ']' 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 179375 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 179375 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 179375' 00:11:18.403 killing process with pid 179375 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 179375 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 179375 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:18.403 09:29:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.403 09:29:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.319 09:29:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:20.319 00:11:20.319 real 0m33.326s 00:11:20.319 user 0m44.937s 00:11:20.319 sys 0m10.078s 00:11:20.319 09:29:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.319 09:29:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:20.319 ************************************ 00:11:20.319 END TEST nvmf_zcopy 00:11:20.319 ************************************ 00:11:20.319 09:29:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:20.319 09:29:06 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.319 09:29:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.319 09:29:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:20.319 ************************************ 00:11:20.319 START TEST nvmf_nmic 00:11:20.319 ************************************ 00:11:20.319 09:29:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:20.319 * Looking for test storage... 00:11:20.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.319 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:20.319 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:20.319 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.582 
09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.582 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@368 -- # return 0 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:20.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.583 --rc genhtml_branch_coverage=1 00:11:20.583 --rc genhtml_function_coverage=1 00:11:20.583 --rc genhtml_legend=1 00:11:20.583 --rc geninfo_all_blocks=1 00:11:20.583 --rc geninfo_unexecuted_blocks=1 00:11:20.583 00:11:20.583 ' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:20.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.583 --rc genhtml_branch_coverage=1 00:11:20.583 --rc genhtml_function_coverage=1 00:11:20.583 --rc genhtml_legend=1 00:11:20.583 --rc geninfo_all_blocks=1 00:11:20.583 --rc geninfo_unexecuted_blocks=1 00:11:20.583 00:11:20.583 ' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:20.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.583 --rc genhtml_branch_coverage=1 00:11:20.583 --rc genhtml_function_coverage=1 00:11:20.583 --rc genhtml_legend=1 00:11:20.583 --rc geninfo_all_blocks=1 00:11:20.583 --rc geninfo_unexecuted_blocks=1 00:11:20.583 00:11:20.583 ' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:20.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.583 --rc genhtml_branch_coverage=1 00:11:20.583 --rc genhtml_function_coverage=1 00:11:20.583 --rc genhtml_legend=1 00:11:20.583 --rc geninfo_all_blocks=1 00:11:20.583 --rc geninfo_unexecuted_blocks=1 00:11:20.583 00:11:20.583 ' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:20.583 09:29:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.583 09:29:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.736 09:29:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.736 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:28.737 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.737 09:29:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:28.737 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:28.737 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:28.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.737 09:29:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.737 
09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:11:28.737 00:11:28.737 --- 10.0.0.2 ping statistics --- 00:11:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.737 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:11:28.737 00:11:28.737 --- 10.0.0.1 ping statistics --- 00:11:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.737 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=188315 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 188315 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 188315 ']' 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.737 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.738 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.738 09:29:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:28.738 [2024-11-19 09:29:14.646097] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:28.738 [2024-11-19 09:29:14.646168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.738 [2024-11-19 09:29:14.743730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.738 [2024-11-19 09:29:14.797800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.738 [2024-11-19 09:29:14.797852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:28.738 [2024-11-19 09:29:14.797860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.738 [2024-11-19 09:29:14.797867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.738 [2024-11-19 09:29:14.797874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.738 [2024-11-19 09:29:14.799901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.738 [2024-11-19 09:29:14.800031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.738 [2024-11-19 09:29:14.800220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.738 [2024-11-19 09:29:14.800257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.738 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.738 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:28.738 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.738 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.738 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 [2024-11-19 09:29:15.523916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.000 
09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 Malloc0 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 [2024-11-19 09:29:15.606402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:29.000 test case1: single bdev can't be used in multiple subsystems 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 [2024-11-19 09:29:15.642227] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:29.000 [2024-11-19 
09:29:15.642254] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:29.000 [2024-11-19 09:29:15.642263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.000 request: 00:11:29.000 { 00:11:29.000 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:29.000 "namespace": { 00:11:29.000 "bdev_name": "Malloc0", 00:11:29.000 "no_auto_visible": false 00:11:29.000 }, 00:11:29.000 "method": "nvmf_subsystem_add_ns", 00:11:29.000 "req_id": 1 00:11:29.000 } 00:11:29.000 Got JSON-RPC error response 00:11:29.000 response: 00:11:29.000 { 00:11:29.000 "code": -32602, 00:11:29.000 "message": "Invalid parameters" 00:11:29.000 } 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:29.000 Adding namespace failed - expected result. 
00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:29.000 test case2: host connect to nvmf target in multiple paths 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.000 [2024-11-19 09:29:15.654429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.000 09:29:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.920 09:29:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:32.307 09:29:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.307 09:29:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:32.307 09:29:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.307 09:29:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:32.307 09:29:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:34.221 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:34.221 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:34.221 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.221 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:34.221 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.221 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:34.221 09:29:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:34.221 [global] 00:11:34.221 thread=1 00:11:34.221 invalidate=1 00:11:34.221 rw=write 00:11:34.221 time_based=1 00:11:34.221 runtime=1 00:11:34.221 ioengine=libaio 00:11:34.221 direct=1 00:11:34.221 bs=4096 00:11:34.221 iodepth=1 00:11:34.221 norandommap=0 00:11:34.221 numjobs=1 00:11:34.221 00:11:34.221 verify_dump=1 00:11:34.221 verify_backlog=512 00:11:34.221 verify_state_save=0 00:11:34.221 do_verify=1 00:11:34.221 verify=crc32c-intel 00:11:34.221 [job0] 00:11:34.221 filename=/dev/nvme0n1 00:11:34.221 Could not set queue depth (nvme0n1) 00:11:34.792 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.792 fio-3.35 00:11:34.792 Starting 1 thread 00:11:36.177 00:11:36.177 job0: (groupid=0, jobs=1): err= 0: pid=189857: Tue Nov 19 09:29:22 2024 00:11:36.177 read: IOPS=17, BW=69.6KiB/s (71.3kB/s)(72.0KiB/1034msec) 00:11:36.177 slat (nsec): min=24187, max=24704, avg=24366.11, stdev=153.29 00:11:36.177 clat (usec): min=1024, max=42068, avg=39682.07, stdev=9647.83 00:11:36.177 lat (usec): min=1049, max=42092, 
avg=39706.43, stdev=9647.83 00:11:36.177 clat percentiles (usec): 00:11:36.177 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[41681], 20.00th=[41681], 00:11:36.177 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:36.177 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:36.177 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:36.177 | 99.99th=[42206] 00:11:36.177 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:11:36.177 slat (nsec): min=9374, max=62700, avg=27845.69, stdev=8921.51 00:11:36.177 clat (usec): min=239, max=832, avg=589.67, stdev=96.30 00:11:36.177 lat (usec): min=249, max=865, avg=617.52, stdev=99.85 00:11:36.177 clat percentiles (usec): 00:11:36.177 | 1.00th=[ 338], 5.00th=[ 412], 10.00th=[ 465], 20.00th=[ 506], 00:11:36.177 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:11:36.177 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 734], 00:11:36.177 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 832], 99.95th=[ 832], 00:11:36.177 | 99.99th=[ 832] 00:11:36.177 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:36.177 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:36.177 lat (usec) : 250=0.19%, 500=16.23%, 750=77.17%, 1000=3.02% 00:11:36.177 lat (msec) : 2=0.19%, 50=3.21% 00:11:36.177 cpu : usr=0.48%, sys=1.55%, ctx=530, majf=0, minf=1 00:11:36.178 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.178 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.178 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.178 00:11:36.178 Run status group 0 (all jobs): 00:11:36.178 READ: bw=69.6KiB/s (71.3kB/s), 69.6KiB/s-69.6KiB/s 
(71.3kB/s-71.3kB/s), io=72.0KiB (73.7kB), run=1034-1034msec 00:11:36.178 WRITE: bw=1981KiB/s (2028kB/s), 1981KiB/s-1981KiB/s (2028kB/s-2028kB/s), io=2048KiB (2097kB), run=1034-1034msec 00:11:36.178 00:11:36.178 Disk stats (read/write): 00:11:36.178 nvme0n1: ios=64/512, merge=0/0, ticks=590/292, in_queue=882, util=92.99% 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:36.178 09:29:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:36.178 rmmod nvme_tcp 00:11:36.178 rmmod nvme_fabrics 00:11:36.178 rmmod nvme_keyring 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 188315 ']' 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 188315 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 188315 ']' 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 188315 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 188315 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 188315' 00:11:36.178 killing process with pid 188315 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 188315 00:11:36.178 09:29:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 188315 
00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.439 09:29:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.352 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.352 00:11:38.352 real 0m18.178s 00:11:38.352 user 0m49.483s 00:11:38.352 sys 0m6.788s 00:11:38.352 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.352 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:38.352 ************************************ 00:11:38.352 END TEST nvmf_nmic 00:11:38.352 ************************************ 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:38.614 ************************************ 00:11:38.614 START TEST nvmf_fio_target 00:11:38.614 ************************************ 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:38.614 * Looking for test storage... 00:11:38.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:38.614 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:38.877 09:29:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:38.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.877 --rc genhtml_branch_coverage=1 00:11:38.877 --rc genhtml_function_coverage=1 00:11:38.877 --rc genhtml_legend=1 00:11:38.877 --rc geninfo_all_blocks=1 00:11:38.877 --rc geninfo_unexecuted_blocks=1 00:11:38.877 00:11:38.877 ' 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:38.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.877 --rc genhtml_branch_coverage=1 00:11:38.877 --rc genhtml_function_coverage=1 00:11:38.877 --rc genhtml_legend=1 00:11:38.877 --rc geninfo_all_blocks=1 00:11:38.877 --rc geninfo_unexecuted_blocks=1 00:11:38.877 00:11:38.877 ' 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:38.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.877 --rc genhtml_branch_coverage=1 00:11:38.877 --rc genhtml_function_coverage=1 00:11:38.877 --rc genhtml_legend=1 00:11:38.877 --rc geninfo_all_blocks=1 00:11:38.877 --rc geninfo_unexecuted_blocks=1 00:11:38.877 00:11:38.877 ' 00:11:38.877 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:38.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.877 --rc genhtml_branch_coverage=1 00:11:38.877 --rc genhtml_function_coverage=1 00:11:38.877 --rc genhtml_legend=1 00:11:38.877 --rc geninfo_all_blocks=1 00:11:38.877 --rc geninfo_unexecuted_blocks=1 00:11:38.877 00:11:38.877 ' 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.878 09:29:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.025 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.025 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.025 09:29:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.025 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.025 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.025 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:47.026 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:47.026 09:29:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:47.026 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:47.026 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:47.026 Found net devices under 0000:4b:00.1: cvl_0_1 
00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:47.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:11:47.026 00:11:47.026 --- 10.0.0.2 ping statistics --- 00:11:47.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.026 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:11:47.026 00:11:47.026 --- 10.0.0.1 ping statistics --- 00:11:47.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.026 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.026 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=194216 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 194216 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 194216 ']' 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.027 09:29:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.027 [2024-11-19 09:29:32.914332] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:47.027 [2024-11-19 09:29:32.914401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.027 [2024-11-19 09:29:33.015227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.027 [2024-11-19 09:29:33.069827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.027 [2024-11-19 09:29:33.069876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.027 [2024-11-19 09:29:33.069885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.027 [2024-11-19 09:29:33.069892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.027 [2024-11-19 09:29:33.069899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:47.027 [2024-11-19 09:29:33.072229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.027 [2024-11-19 09:29:33.072298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.027 [2024-11-19 09:29:33.072461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.027 [2024-11-19 09:29:33.072464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.027 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.027 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:47.027 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.027 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.027 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.288 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.288 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:47.288 [2024-11-19 09:29:33.950196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.288 09:29:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:47.549 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:47.549 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:47.810 09:29:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:47.810 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:48.071 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:48.071 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:48.333 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:48.333 09:29:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:48.333 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:48.594 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:48.594 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:48.856 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:48.856 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:49.117 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:49.117 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:49.378 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.378 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:49.378 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:49.638 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:49.638 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.899 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.900 [2024-11-19 09:29:36.565794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.900 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:50.160 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:50.421 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:51.807 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:51.807 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:51.807 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.807 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:51.807 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:51.807 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.722 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.722 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.722 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.722 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:53.722 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.722 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:53.722 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:53.983 [global] 00:11:53.983 thread=1 00:11:53.983 invalidate=1 00:11:53.983 rw=write 00:11:53.983 time_based=1 00:11:53.983 runtime=1 00:11:53.983 ioengine=libaio 00:11:53.983 direct=1 00:11:53.983 bs=4096 00:11:53.983 iodepth=1 00:11:53.983 norandommap=0 00:11:53.983 numjobs=1 00:11:53.983 00:11:53.983 
verify_dump=1 00:11:53.983 verify_backlog=512 00:11:53.983 verify_state_save=0 00:11:53.983 do_verify=1 00:11:53.983 verify=crc32c-intel 00:11:53.983 [job0] 00:11:53.983 filename=/dev/nvme0n1 00:11:53.983 [job1] 00:11:53.983 filename=/dev/nvme0n2 00:11:53.983 [job2] 00:11:53.983 filename=/dev/nvme0n3 00:11:53.983 [job3] 00:11:53.983 filename=/dev/nvme0n4 00:11:53.983 Could not set queue depth (nvme0n1) 00:11:53.983 Could not set queue depth (nvme0n2) 00:11:53.983 Could not set queue depth (nvme0n3) 00:11:53.983 Could not set queue depth (nvme0n4) 00:11:54.244 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:54.244 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:54.244 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:54.244 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:54.244 fio-3.35 00:11:54.244 Starting 4 threads 00:11:55.630 00:11:55.630 job0: (groupid=0, jobs=1): err= 0: pid=196136: Tue Nov 19 09:29:42 2024 00:11:55.630 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:55.630 slat (nsec): min=24035, max=25354, avg=24638.03, stdev=162.04 00:11:55.630 clat (usec): min=671, max=1125, avg=962.81, stdev=46.69 00:11:55.630 lat (usec): min=695, max=1149, avg=987.45, stdev=46.64 00:11:55.630 clat percentiles (usec): 00:11:55.630 | 1.00th=[ 824], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 938], 00:11:55.630 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:11:55.630 | 70.00th=[ 988], 80.00th=[ 996], 90.00th=[ 1012], 95.00th=[ 1029], 00:11:55.630 | 99.00th=[ 1074], 99.50th=[ 1074], 99.90th=[ 1123], 99.95th=[ 1123], 00:11:55.630 | 99.99th=[ 1123] 00:11:55.630 write: IOPS=819, BW=3277KiB/s (3355kB/s)(3280KiB/1001msec); 0 zone resets 00:11:55.630 slat (nsec): min=9626, max=73743, avg=30439.62, 
stdev=7804.62 00:11:55.630 clat (usec): min=163, max=986, avg=560.24, stdev=130.60 00:11:55.630 lat (usec): min=195, max=1019, avg=590.68, stdev=132.44 00:11:55.630 clat percentiles (usec): 00:11:55.630 | 1.00th=[ 258], 5.00th=[ 351], 10.00th=[ 379], 20.00th=[ 437], 00:11:55.630 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 603], 00:11:55.630 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 750], 00:11:55.630 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 988], 99.95th=[ 988], 00:11:55.630 | 99.99th=[ 988] 00:11:55.630 bw ( KiB/s): min= 4096, max= 4096, per=44.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:55.630 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:55.630 lat (usec) : 250=0.53%, 500=19.67%, 750=38.44%, 1000=34.83% 00:11:55.630 lat (msec) : 2=6.53% 00:11:55.630 cpu : usr=1.00%, sys=5.00%, ctx=1333, majf=0, minf=1 00:11:55.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:55.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.631 issued rwts: total=512,820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:55.631 job1: (groupid=0, jobs=1): err= 0: pid=196137: Tue Nov 19 09:29:42 2024 00:11:55.631 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:11:55.631 slat (nsec): min=25657, max=26342, avg=26040.14, stdev=189.61 00:11:55.631 clat (usec): min=623, max=42526, avg=34346.31, stdev=16203.58 00:11:55.631 lat (usec): min=649, max=42552, avg=34372.35, stdev=16203.54 00:11:55.631 clat percentiles (usec): 00:11:55.631 | 1.00th=[ 627], 5.00th=[ 766], 10.00th=[ 848], 20.00th=[41157], 00:11:55.631 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:55.631 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:55.631 | 99.00th=[42730], 99.50th=[42730], 
99.90th=[42730], 99.95th=[42730], 00:11:55.631 | 99.99th=[42730] 00:11:55.631 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:11:55.631 slat (usec): min=9, max=723, avg=28.50, stdev=32.84 00:11:55.631 clat (usec): min=162, max=1223, avg=478.44, stdev=133.98 00:11:55.631 lat (usec): min=173, max=1234, avg=506.94, stdev=140.97 00:11:55.631 clat percentiles (usec): 00:11:55.631 | 1.00th=[ 227], 5.00th=[ 269], 10.00th=[ 289], 20.00th=[ 371], 00:11:55.631 | 30.00th=[ 404], 40.00th=[ 449], 50.00th=[ 486], 60.00th=[ 515], 00:11:55.631 | 70.00th=[ 545], 80.00th=[ 586], 90.00th=[ 627], 95.00th=[ 676], 00:11:55.631 | 99.00th=[ 873], 99.50th=[ 930], 99.90th=[ 1221], 99.95th=[ 1221], 00:11:55.631 | 99.99th=[ 1221] 00:11:55.631 bw ( KiB/s): min= 4096, max= 4096, per=44.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:55.631 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:55.631 lat (usec) : 250=2.43%, 500=50.75%, 750=41.01%, 1000=2.25% 00:11:55.631 lat (msec) : 2=0.19%, 50=3.37% 00:11:55.631 cpu : usr=0.59%, sys=1.47%, ctx=536, majf=0, minf=1 00:11:55.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:55.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.631 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:55.631 job2: (groupid=0, jobs=1): err= 0: pid=196138: Tue Nov 19 09:29:42 2024 00:11:55.631 read: IOPS=17, BW=70.1KiB/s (71.8kB/s)(72.0KiB/1027msec) 00:11:55.631 slat (nsec): min=27271, max=28403, avg=27821.61, stdev=278.35 00:11:55.631 clat (usec): min=923, max=42047, avg=38944.14, stdev=9495.91 00:11:55.631 lat (usec): min=951, max=42075, avg=38971.96, stdev=9495.97 00:11:55.631 clat percentiles (usec): 00:11:55.631 | 1.00th=[ 922], 5.00th=[ 922], 10.00th=[40633], 20.00th=[41157], 
00:11:55.631 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:55.631 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:11:55.631 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:55.631 | 99.99th=[42206] 00:11:55.631 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:11:55.631 slat (nsec): min=9938, max=67896, avg=33066.53, stdev=9263.59 00:11:55.631 clat (usec): min=206, max=929, avg=595.54, stdev=138.11 00:11:55.631 lat (usec): min=217, max=972, avg=628.61, stdev=140.53 00:11:55.631 clat percentiles (usec): 00:11:55.631 | 1.00th=[ 249], 5.00th=[ 363], 10.00th=[ 420], 20.00th=[ 478], 00:11:55.631 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 635], 00:11:55.631 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 824], 00:11:55.631 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 930], 99.95th=[ 930], 00:11:55.631 | 99.99th=[ 930] 00:11:55.631 bw ( KiB/s): min= 4096, max= 4096, per=44.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:55.631 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:55.631 lat (usec) : 250=1.13%, 500=23.77%, 750=59.62%, 1000=12.26% 00:11:55.631 lat (msec) : 50=3.21% 00:11:55.631 cpu : usr=0.49%, sys=2.63%, ctx=531, majf=0, minf=1 00:11:55.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:55.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.631 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:55.631 job3: (groupid=0, jobs=1): err= 0: pid=196139: Tue Nov 19 09:29:42 2024 00:11:55.631 read: IOPS=106, BW=427KiB/s (437kB/s)(428KiB/1002msec) 00:11:55.631 slat (nsec): min=10950, max=43266, avg=26947.50, stdev=2603.01 00:11:55.631 clat (usec): min=630, max=42030, avg=6353.91, 
stdev=13723.44 00:11:55.631 lat (usec): min=658, max=42058, avg=6380.86, stdev=13723.69 00:11:55.631 clat percentiles (usec): 00:11:55.631 | 1.00th=[ 766], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 963], 00:11:55.631 | 30.00th=[ 996], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1123], 00:11:55.631 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[41157], 95.00th=[41681], 00:11:55.631 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:55.631 | 99.99th=[42206] 00:11:55.631 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:55.631 slat (nsec): min=10184, max=69848, avg=32050.25, stdev=9629.74 00:11:55.631 clat (usec): min=233, max=1000, avg=581.30, stdev=139.89 00:11:55.631 lat (usec): min=244, max=1035, avg=613.35, stdev=144.02 00:11:55.631 clat percentiles (usec): 00:11:55.631 | 1.00th=[ 277], 5.00th=[ 343], 10.00th=[ 392], 20.00th=[ 453], 00:11:55.631 | 30.00th=[ 515], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 619], 00:11:55.631 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 758], 95.00th=[ 799], 00:11:55.631 | 99.00th=[ 898], 99.50th=[ 971], 99.90th=[ 1004], 99.95th=[ 1004], 00:11:55.631 | 99.99th=[ 1004] 00:11:55.631 bw ( KiB/s): min= 4096, max= 4096, per=44.64%, avg=4096.00, stdev= 0.00, samples=1 00:11:55.631 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:55.631 lat (usec) : 250=0.32%, 500=22.94%, 750=50.73%, 1000=13.89% 00:11:55.631 lat (msec) : 2=9.85%, 50=2.26% 00:11:55.631 cpu : usr=0.50%, sys=2.30%, ctx=621, majf=0, minf=1 00:11:55.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:55.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.631 issued rwts: total=107,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:55.631 00:11:55.631 Run status group 0 (all jobs): 00:11:55.631 
READ: bw=2567KiB/s (2628kB/s), 70.1KiB/s-2046KiB/s (71.8kB/s-2095kB/s), io=2636KiB (2699kB), run=1001-1027msec 00:11:55.631 WRITE: bw=9176KiB/s (9396kB/s), 1994KiB/s-3277KiB/s (2042kB/s-3355kB/s), io=9424KiB (9650kB), run=1001-1027msec 00:11:55.631 00:11:55.631 Disk stats (read/write): 00:11:55.631 nvme0n1: ios=562/547, merge=0/0, ticks=575/281, in_queue=856, util=86.77% 00:11:55.631 nvme0n2: ios=67/512, merge=0/0, ticks=797/238, in_queue=1035, util=96.11% 00:11:55.631 nvme0n3: ios=42/512, merge=0/0, ticks=1413/245, in_queue=1658, util=96.18% 00:11:55.631 nvme0n4: ios=77/512, merge=0/0, ticks=1324/280, in_queue=1604, util=96.13% 00:11:55.631 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:55.631 [global] 00:11:55.631 thread=1 00:11:55.632 invalidate=1 00:11:55.632 rw=randwrite 00:11:55.632 time_based=1 00:11:55.632 runtime=1 00:11:55.632 ioengine=libaio 00:11:55.632 direct=1 00:11:55.632 bs=4096 00:11:55.632 iodepth=1 00:11:55.632 norandommap=0 00:11:55.632 numjobs=1 00:11:55.632 00:11:55.632 verify_dump=1 00:11:55.632 verify_backlog=512 00:11:55.632 verify_state_save=0 00:11:55.632 do_verify=1 00:11:55.632 verify=crc32c-intel 00:11:55.632 [job0] 00:11:55.632 filename=/dev/nvme0n1 00:11:55.632 [job1] 00:11:55.632 filename=/dev/nvme0n2 00:11:55.632 [job2] 00:11:55.632 filename=/dev/nvme0n3 00:11:55.632 [job3] 00:11:55.632 filename=/dev/nvme0n4 00:11:55.632 Could not set queue depth (nvme0n1) 00:11:55.632 Could not set queue depth (nvme0n2) 00:11:55.632 Could not set queue depth (nvme0n3) 00:11:55.632 Could not set queue depth (nvme0n4) 00:11:55.892 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.893 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.893 job2: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.893 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:55.893 fio-3.35 00:11:55.893 Starting 4 threads 00:11:57.281 00:11:57.281 job0: (groupid=0, jobs=1): err= 0: pid=196664: Tue Nov 19 09:29:43 2024 00:11:57.281 read: IOPS=16, BW=67.7KiB/s (69.3kB/s)(68.0KiB/1005msec) 00:11:57.281 slat (nsec): min=26450, max=27149, avg=26688.12, stdev=164.19 00:11:57.281 clat (usec): min=1186, max=42087, avg=39435.87, stdev=9862.39 00:11:57.281 lat (usec): min=1212, max=42113, avg=39462.56, stdev=9862.45 00:11:57.281 clat percentiles (usec): 00:11:57.281 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41157], 20.00th=[41681], 00:11:57.281 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:57.281 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:57.281 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:57.281 | 99.99th=[42206] 00:11:57.281 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:11:57.281 slat (usec): min=9, max=5471, avg=40.31, stdev=240.67 00:11:57.281 clat (usec): min=332, max=1133, avg=602.45, stdev=106.74 00:11:57.281 lat (usec): min=342, max=6295, avg=642.76, stdev=273.51 00:11:57.281 clat percentiles (usec): 00:11:57.281 | 1.00th=[ 343], 5.00th=[ 420], 10.00th=[ 453], 20.00th=[ 515], 00:11:57.281 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:11:57.281 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 750], 00:11:57.281 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 1139], 99.95th=[ 1139], 00:11:57.281 | 99.99th=[ 1139] 00:11:57.281 bw ( KiB/s): min= 4087, max= 4087, per=38.26%, avg=4087.00, stdev= 0.00, samples=1 00:11:57.281 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:57.281 lat (usec) : 500=16.82%, 750=74.67%, 1000=5.10% 00:11:57.281 lat (msec) : 2=0.38%, 50=3.02% 
00:11:57.281 cpu : usr=0.70%, sys=2.29%, ctx=533, majf=0, minf=1 00:11:57.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.281 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.281 job1: (groupid=0, jobs=1): err= 0: pid=196665: Tue Nov 19 09:29:43 2024 00:11:57.281 read: IOPS=17, BW=69.3KiB/s (71.0kB/s)(72.0KiB/1039msec) 00:11:57.281 slat (nsec): min=9823, max=28161, avg=25209.39, stdev=3886.15 00:11:57.281 clat (usec): min=939, max=42080, avg=39497.41, stdev=9630.25 00:11:57.281 lat (usec): min=949, max=42106, avg=39522.62, stdev=9634.08 00:11:57.281 clat percentiles (usec): 00:11:57.281 | 1.00th=[ 938], 5.00th=[ 938], 10.00th=[41157], 20.00th=[41157], 00:11:57.281 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:57.281 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:57.281 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:57.281 | 99.99th=[42206] 00:11:57.281 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:11:57.281 slat (nsec): min=8866, max=53350, avg=27929.09, stdev=9773.70 00:11:57.281 clat (usec): min=206, max=915, avg=601.44, stdev=120.13 00:11:57.281 lat (usec): min=216, max=954, avg=629.37, stdev=124.94 00:11:57.281 clat percentiles (usec): 00:11:57.281 | 1.00th=[ 310], 5.00th=[ 396], 10.00th=[ 433], 20.00th=[ 490], 00:11:57.281 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 644], 00:11:57.281 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 775], 00:11:57.281 | 99.00th=[ 816], 99.50th=[ 889], 99.90th=[ 914], 99.95th=[ 914], 00:11:57.281 | 99.99th=[ 914] 00:11:57.281 bw ( KiB/s): min= 4087, max= 4087, per=38.26%, avg=4087.00, 
stdev= 0.00, samples=1 00:11:57.281 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:57.281 lat (usec) : 250=0.57%, 500=20.38%, 750=66.60%, 1000=9.25% 00:11:57.281 lat (msec) : 50=3.21% 00:11:57.281 cpu : usr=0.96%, sys=1.83%, ctx=531, majf=0, minf=1 00:11:57.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.281 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.281 job2: (groupid=0, jobs=1): err= 0: pid=196666: Tue Nov 19 09:29:43 2024 00:11:57.281 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:57.281 slat (nsec): min=7342, max=46210, avg=26277.99, stdev=1638.88 00:11:57.281 clat (usec): min=705, max=41794, avg=1049.00, stdev=1805.89 00:11:57.281 lat (usec): min=712, max=41820, avg=1075.28, stdev=1805.88 00:11:57.281 clat percentiles (usec): 00:11:57.281 | 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 922], 00:11:57.281 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 996], 00:11:57.281 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1045], 95.00th=[ 1074], 00:11:57.281 | 99.00th=[ 1123], 99.50th=[ 1205], 99.90th=[41681], 99.95th=[41681], 00:11:57.281 | 99.99th=[41681] 00:11:57.281 write: IOPS=726, BW=2905KiB/s (2975kB/s)(2908KiB/1001msec); 0 zone resets 00:11:57.281 slat (nsec): min=8895, max=54164, avg=28628.78, stdev=9946.86 00:11:57.281 clat (usec): min=226, max=863, avg=576.27, stdev=111.03 00:11:57.281 lat (usec): min=236, max=896, avg=604.90, stdev=115.36 00:11:57.281 clat percentiles (usec): 00:11:57.281 | 1.00th=[ 326], 5.00th=[ 375], 10.00th=[ 437], 20.00th=[ 478], 00:11:57.281 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:11:57.281 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 725], 
95.00th=[ 750], 00:11:57.281 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 865], 99.95th=[ 865], 00:11:57.281 | 99.99th=[ 865] 00:11:57.281 bw ( KiB/s): min= 4087, max= 4087, per=38.26%, avg=4087.00, stdev= 0.00, samples=1 00:11:57.281 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:57.281 lat (usec) : 250=0.16%, 500=14.69%, 750=41.49%, 1000=28.89% 00:11:57.281 lat (msec) : 2=14.69%, 50=0.08% 00:11:57.281 cpu : usr=3.10%, sys=4.10%, ctx=1239, majf=0, minf=2 00:11:57.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.281 issued rwts: total=512,727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.281 job3: (groupid=0, jobs=1): err= 0: pid=196667: Tue Nov 19 09:29:43 2024 00:11:57.281 read: IOPS=578, BW=2314KiB/s (2369kB/s)(2316KiB/1001msec) 00:11:57.281 slat (nsec): min=7177, max=60704, avg=26139.13, stdev=4636.78 00:11:57.281 clat (usec): min=353, max=1154, avg=857.94, stdev=111.58 00:11:57.281 lat (usec): min=379, max=1180, avg=884.08, stdev=111.92 00:11:57.281 clat percentiles (usec): 00:11:57.281 | 1.00th=[ 562], 5.00th=[ 660], 10.00th=[ 717], 20.00th=[ 766], 00:11:57.281 | 30.00th=[ 807], 40.00th=[ 840], 50.00th=[ 873], 60.00th=[ 898], 00:11:57.281 | 70.00th=[ 922], 80.00th=[ 947], 90.00th=[ 988], 95.00th=[ 1012], 00:11:57.281 | 99.00th=[ 1074], 99.50th=[ 1123], 99.90th=[ 1156], 99.95th=[ 1156], 00:11:57.281 | 99.99th=[ 1156] 00:11:57.281 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:57.282 slat (nsec): min=9517, max=62346, avg=29691.04, stdev=9245.24 00:11:57.282 clat (usec): min=117, max=1035, avg=434.46, stdev=160.04 00:11:57.282 lat (usec): min=127, max=1068, avg=464.15, stdev=163.42 00:11:57.282 clat percentiles (usec): 00:11:57.282 | 
1.00th=[ 145], 5.00th=[ 223], 10.00th=[ 258], 20.00th=[ 297], 00:11:57.282 | 30.00th=[ 330], 40.00th=[ 371], 50.00th=[ 412], 60.00th=[ 453], 00:11:57.282 | 70.00th=[ 498], 80.00th=[ 553], 90.00th=[ 652], 95.00th=[ 734], 00:11:57.282 | 99.00th=[ 898], 99.50th=[ 963], 99.90th=[ 1020], 99.95th=[ 1037], 00:11:57.282 | 99.99th=[ 1037] 00:11:57.282 bw ( KiB/s): min= 4087, max= 4087, per=38.26%, avg=4087.00, stdev= 0.00, samples=1 00:11:57.282 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:57.282 lat (usec) : 250=5.68%, 500=39.36%, 750=22.02%, 1000=29.88% 00:11:57.282 lat (msec) : 2=3.06% 00:11:57.282 cpu : usr=3.00%, sys=4.10%, ctx=1604, majf=0, minf=1 00:11:57.282 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.282 issued rwts: total=579,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.282 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.282 00:11:57.282 Run status group 0 (all jobs): 00:11:57.282 READ: bw=4335KiB/s (4439kB/s), 67.7KiB/s-2314KiB/s (69.3kB/s-2369kB/s), io=4504KiB (4612kB), run=1001-1039msec 00:11:57.282 WRITE: bw=10.4MiB/s (10.9MB/s), 1971KiB/s-4092KiB/s (2018kB/s-4190kB/s), io=10.8MiB (11.4MB), run=1001-1039msec 00:11:57.282 00:11:57.282 Disk stats (read/write): 00:11:57.282 nvme0n1: ios=66/512, merge=0/0, ticks=696/246, in_queue=942, util=84.97% 00:11:57.282 nvme0n2: ios=63/512, merge=0/0, ticks=592/235, in_queue=827, util=91.34% 00:11:57.282 nvme0n3: ios=532/512, merge=0/0, ticks=584/235, in_queue=819, util=95.04% 00:11:57.282 nvme0n4: ios=561/758, merge=0/0, ticks=692/333, in_queue=1025, util=97.23% 00:11:57.282 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:57.282 
[global] 00:11:57.282 thread=1 00:11:57.282 invalidate=1 00:11:57.282 rw=write 00:11:57.282 time_based=1 00:11:57.282 runtime=1 00:11:57.282 ioengine=libaio 00:11:57.282 direct=1 00:11:57.282 bs=4096 00:11:57.282 iodepth=128 00:11:57.282 norandommap=0 00:11:57.282 numjobs=1 00:11:57.282 00:11:57.282 verify_dump=1 00:11:57.282 verify_backlog=512 00:11:57.282 verify_state_save=0 00:11:57.282 do_verify=1 00:11:57.282 verify=crc32c-intel 00:11:57.282 [job0] 00:11:57.282 filename=/dev/nvme0n1 00:11:57.282 [job1] 00:11:57.282 filename=/dev/nvme0n2 00:11:57.282 [job2] 00:11:57.282 filename=/dev/nvme0n3 00:11:57.282 [job3] 00:11:57.282 filename=/dev/nvme0n4 00:11:57.282 Could not set queue depth (nvme0n1) 00:11:57.282 Could not set queue depth (nvme0n2) 00:11:57.282 Could not set queue depth (nvme0n3) 00:11:57.282 Could not set queue depth (nvme0n4) 00:11:57.543 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.543 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.543 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.543 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.543 fio-3.35 00:11:57.543 Starting 4 threads 00:11:58.930 00:11:58.930 job0: (groupid=0, jobs=1): err= 0: pid=197191: Tue Nov 19 09:29:45 2024 00:11:58.930 read: IOPS=7611, BW=29.7MiB/s (31.2MB/s)(30.0MiB/1009msec) 00:11:58.930 slat (nsec): min=879, max=9576.6k, avg=59166.44, stdev=442450.27 00:11:58.930 clat (usec): min=2949, max=31294, avg=7709.70, stdev=3255.95 00:11:58.930 lat (usec): min=2952, max=31301, avg=7768.86, stdev=3293.66 00:11:58.930 clat percentiles (usec): 00:11:58.930 | 1.00th=[ 4424], 5.00th=[ 5080], 10.00th=[ 5342], 20.00th=[ 5800], 00:11:58.930 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 6849], 60.00th=[ 7308], 00:11:58.930 | 
70.00th=[ 7767], 80.00th=[ 8586], 90.00th=[10421], 95.00th=[13173], 00:11:58.930 | 99.00th=[20055], 99.50th=[25822], 99.90th=[30540], 99.95th=[30540], 00:11:58.930 | 99.99th=[31327] 00:11:58.931 write: IOPS=8013, BW=31.3MiB/s (32.8MB/s)(31.6MiB/1009msec); 0 zone resets 00:11:58.931 slat (nsec): min=1542, max=7436.9k, avg=63193.54, stdev=425152.62 00:11:58.931 clat (usec): min=1314, max=75660, avg=8510.09, stdev=10269.70 00:11:58.931 lat (usec): min=1324, max=75669, avg=8573.28, stdev=10337.97 00:11:58.931 clat percentiles (usec): 00:11:58.931 | 1.00th=[ 2442], 5.00th=[ 3589], 10.00th=[ 3949], 20.00th=[ 4817], 00:11:58.931 | 30.00th=[ 5604], 40.00th=[ 5932], 50.00th=[ 6128], 60.00th=[ 6325], 00:11:58.931 | 70.00th=[ 6783], 80.00th=[ 7308], 90.00th=[12125], 95.00th=[21890], 00:11:58.931 | 99.00th=[67634], 99.50th=[70779], 99.90th=[74974], 99.95th=[76022], 00:11:58.931 | 99.99th=[76022] 00:11:58.931 bw ( KiB/s): min=22704, max=40960, per=35.81%, avg=31832.00, stdev=12908.94, samples=2 00:11:58.931 iops : min= 5676, max=10240, avg=7958.00, stdev=3227.24, samples=2 00:11:58.931 lat (msec) : 2=0.20%, 4=5.65%, 10=82.04%, 20=8.80%, 50=1.99% 00:11:58.931 lat (msec) : 100=1.31% 00:11:58.931 cpu : usr=5.46%, sys=6.75%, ctx=564, majf=0, minf=1 00:11:58.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:58.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:58.931 issued rwts: total=7680,8086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:58.931 job1: (groupid=0, jobs=1): err= 0: pid=197192: Tue Nov 19 09:29:45 2024 00:11:58.931 read: IOPS=6106, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:11:58.931 slat (nsec): min=891, max=17324k, avg=77560.99, stdev=694657.21 00:11:58.931 clat (usec): min=1796, max=48116, avg=10410.99, stdev=6124.82 00:11:58.931 lat (usec): 
min=1809, max=48126, avg=10488.55, stdev=6184.96 00:11:58.931 clat percentiles (usec): 00:11:58.931 | 1.00th=[ 2343], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 6849], 00:11:58.931 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[ 9241], 00:11:58.931 | 70.00th=[10945], 80.00th=[12518], 90.00th=[17433], 95.00th=[23987], 00:11:58.931 | 99.00th=[40633], 99.50th=[43779], 99.90th=[46924], 99.95th=[47973], 00:11:58.931 | 99.99th=[47973] 00:11:58.931 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:11:58.931 slat (nsec): min=1526, max=20391k, avg=73828.95, stdev=561107.01 00:11:58.931 clat (usec): min=1054, max=48572, avg=10349.55, stdev=8197.05 00:11:58.931 lat (usec): min=1064, max=48576, avg=10423.38, stdev=8252.62 00:11:58.931 clat percentiles (usec): 00:11:58.931 | 1.00th=[ 1336], 5.00th=[ 3818], 10.00th=[ 4146], 20.00th=[ 5473], 00:11:58.931 | 30.00th=[ 5866], 40.00th=[ 6456], 50.00th=[ 7046], 60.00th=[ 8717], 00:11:58.931 | 70.00th=[10028], 80.00th=[14877], 90.00th=[20055], 95.00th=[27132], 00:11:58.931 | 99.00th=[45351], 99.50th=[46400], 99.90th=[48497], 99.95th=[48497], 00:11:58.931 | 99.99th=[48497] 00:11:58.931 bw ( KiB/s): min=24576, max=24576, per=27.65%, avg=24576.00, stdev= 0.00, samples=2 00:11:58.931 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:11:58.931 lat (msec) : 2=0.84%, 4=4.23%, 10=63.40%, 20=22.90%, 50=8.64% 00:11:58.931 cpu : usr=4.78%, sys=7.67%, ctx=339, majf=0, minf=2 00:11:58.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:58.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:58.931 issued rwts: total=6137,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:58.931 job2: (groupid=0, jobs=1): err= 0: pid=197195: Tue Nov 19 09:29:45 2024 00:11:58.931 read: IOPS=3305, 
BW=12.9MiB/s (13.5MB/s)(13.0MiB/1006msec) 00:11:58.931 slat (nsec): min=912, max=14201k, avg=112480.08, stdev=878850.60 00:11:58.931 clat (usec): min=2446, max=66484, avg=13542.32, stdev=8375.30 00:11:58.931 lat (usec): min=2788, max=66490, avg=13654.80, stdev=8461.29 00:11:58.931 clat percentiles (usec): 00:11:58.931 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6718], 20.00th=[ 7504], 00:11:58.931 | 30.00th=[ 7832], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[13435], 00:11:58.931 | 70.00th=[15401], 80.00th=[17957], 90.00th=[22676], 95.00th=[28967], 00:11:58.931 | 99.00th=[52691], 99.50th=[60031], 99.90th=[66323], 99.95th=[66323], 00:11:58.931 | 99.99th=[66323] 00:11:58.931 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:11:58.931 slat (nsec): min=1554, max=12143k, avg=158900.70, stdev=823341.53 00:11:58.931 clat (usec): min=680, max=77390, avg=23047.32, stdev=20975.53 00:11:58.931 lat (usec): min=689, max=77399, avg=23206.22, stdev=21127.73 00:11:58.931 clat percentiles (usec): 00:11:58.931 | 1.00th=[ 1860], 5.00th=[ 3523], 10.00th=[ 4817], 20.00th=[ 6390], 00:11:58.931 | 30.00th=[ 8717], 40.00th=[10945], 50.00th=[15270], 60.00th=[18220], 00:11:58.931 | 70.00th=[23987], 80.00th=[49021], 90.00th=[60031], 95.00th=[65799], 00:11:58.931 | 99.00th=[73925], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:11:58.931 | 99.99th=[77071] 00:11:58.931 bw ( KiB/s): min=12208, max=16464, per=16.13%, avg=14336.00, stdev=3009.45, samples=2 00:11:58.931 iops : min= 3052, max= 4116, avg=3584.00, stdev=752.36, samples=2 00:11:58.931 lat (usec) : 750=0.01%, 1000=0.04% 00:11:58.931 lat (msec) : 2=0.88%, 4=2.66%, 10=37.36%, 20=34.80%, 50=13.56% 00:11:58.931 lat (msec) : 100=10.68% 00:11:58.931 cpu : usr=2.49%, sys=3.38%, ctx=316, majf=0, minf=1 00:11:58.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:58.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.931 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:58.931 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:58.931 job3: (groupid=0, jobs=1): err= 0: pid=197196: Tue Nov 19 09:29:45 2024 00:11:58.931 read: IOPS=4312, BW=16.8MiB/s (17.7MB/s)(16.9MiB/1006msec) 00:11:58.931 slat (nsec): min=998, max=17345k, avg=106458.54, stdev=817589.68 00:11:58.931 clat (usec): min=3303, max=76157, avg=12600.20, stdev=8458.89 00:11:58.931 lat (usec): min=3312, max=76164, avg=12706.66, stdev=8545.84 00:11:58.931 clat percentiles (usec): 00:11:58.931 | 1.00th=[ 6063], 5.00th=[ 7046], 10.00th=[ 7111], 20.00th=[ 7504], 00:11:58.931 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10552], 00:11:58.931 | 70.00th=[13435], 80.00th=[15270], 90.00th=[21365], 95.00th=[26870], 00:11:58.931 | 99.00th=[55837], 99.50th=[65274], 99.90th=[76022], 99.95th=[76022], 00:11:58.931 | 99.99th=[76022] 00:11:58.931 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:11:58.931 slat (nsec): min=1699, max=22227k, avg=111274.98, stdev=731014.70 00:11:58.931 clat (usec): min=956, max=76155, avg=15778.74, stdev=14527.49 00:11:58.931 lat (usec): min=2379, max=76168, avg=15890.01, stdev=14630.08 00:11:58.931 clat percentiles (usec): 00:11:58.931 | 1.00th=[ 3458], 5.00th=[ 4948], 10.00th=[ 6325], 20.00th=[ 6587], 00:11:58.931 | 30.00th=[ 6783], 40.00th=[ 7963], 50.00th=[10028], 60.00th=[13698], 00:11:58.931 | 70.00th=[17171], 80.00th=[18482], 90.00th=[31327], 95.00th=[56886], 00:11:58.931 | 99.00th=[63177], 99.50th=[64750], 99.90th=[66847], 99.95th=[66847], 00:11:58.931 | 99.99th=[76022] 00:11:58.931 bw ( KiB/s): min=11888, max=24976, per=20.74%, avg=18432.00, stdev=9254.61, samples=2 00:11:58.931 iops : min= 2972, max= 6244, avg=4608.00, stdev=2313.65, samples=2 00:11:58.931 lat (usec) : 1000=0.01% 00:11:58.931 lat (msec) : 2=0.01%, 4=0.84%, 10=51.40%, 20=32.74%, 50=10.14% 00:11:58.931 
lat (msec) : 100=4.86% 00:11:58.931 cpu : usr=3.58%, sys=5.17%, ctx=366, majf=0, minf=1 00:11:58.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:58.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:58.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:58.931 issued rwts: total=4338,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:58.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:58.931 00:11:58.931 Run status group 0 (all jobs): 00:11:58.931 READ: bw=83.2MiB/s (87.2MB/s), 12.9MiB/s-29.7MiB/s (13.5MB/s-31.2MB/s), io=83.9MiB (88.0MB), run=1005-1009msec 00:11:58.931 WRITE: bw=86.8MiB/s (91.0MB/s), 13.9MiB/s-31.3MiB/s (14.6MB/s-32.8MB/s), io=87.6MiB (91.8MB), run=1005-1009msec 00:11:58.931 00:11:58.931 Disk stats (read/write): 00:11:58.931 nvme0n1: ios=7472/7680, merge=0/0, ticks=52179/47970, in_queue=100149, util=87.98% 00:11:58.931 nvme0n2: ios=4645/4754, merge=0/0, ticks=49653/51056, in_queue=100709, util=87.87% 00:11:58.931 nvme0n3: ios=2048/2466, merge=0/0, ticks=31674/70580, in_queue=102254, util=88.40% 00:11:58.931 nvme0n4: ios=3129/3584, merge=0/0, ticks=40435/61453, in_queue=101888, util=96.90% 00:11:58.931 09:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:58.931 [global] 00:11:58.931 thread=1 00:11:58.931 invalidate=1 00:11:58.931 rw=randwrite 00:11:58.931 time_based=1 00:11:58.931 runtime=1 00:11:58.931 ioengine=libaio 00:11:58.931 direct=1 00:11:58.931 bs=4096 00:11:58.931 iodepth=128 00:11:58.931 norandommap=0 00:11:58.931 numjobs=1 00:11:58.931 00:11:58.931 verify_dump=1 00:11:58.931 verify_backlog=512 00:11:58.931 verify_state_save=0 00:11:58.932 do_verify=1 00:11:58.932 verify=crc32c-intel 00:11:58.932 [job0] 00:11:58.932 filename=/dev/nvme0n1 00:11:58.932 [job1] 00:11:58.932 
filename=/dev/nvme0n2 00:11:58.932 [job2] 00:11:58.932 filename=/dev/nvme0n3 00:11:58.932 [job3] 00:11:58.932 filename=/dev/nvme0n4 00:11:58.932 Could not set queue depth (nvme0n1) 00:11:58.932 Could not set queue depth (nvme0n2) 00:11:58.932 Could not set queue depth (nvme0n3) 00:11:58.932 Could not set queue depth (nvme0n4) 00:11:59.192 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.192 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.192 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.192 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.192 fio-3.35 00:11:59.192 Starting 4 threads 00:12:00.582 00:12:00.582 job0: (groupid=0, jobs=1): err= 0: pid=197715: Tue Nov 19 09:29:47 2024 00:12:00.582 read: IOPS=6794, BW=26.5MiB/s (27.8MB/s)(26.6MiB/1004msec) 00:12:00.582 slat (nsec): min=925, max=7885.8k, avg=73807.12, stdev=404847.23 00:12:00.582 clat (usec): min=1338, max=30612, avg=9257.34, stdev=2828.03 00:12:00.582 lat (usec): min=3989, max=37312, avg=9331.14, stdev=2854.31 00:12:00.582 clat percentiles (usec): 00:12:00.582 | 1.00th=[ 5342], 5.00th=[ 6849], 10.00th=[ 7570], 20.00th=[ 7963], 00:12:00.582 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8979], 00:12:00.582 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[11338], 95.00th=[13435], 00:12:00.582 | 99.00th=[23200], 99.50th=[29492], 99.90th=[30540], 99.95th=[30540], 00:12:00.582 | 99.99th=[30540] 00:12:00.582 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:12:00.582 slat (nsec): min=1551, max=9600.2k, avg=66383.30, stdev=391656.20 00:12:00.582 clat (usec): min=3493, max=30172, avg=8864.51, stdev=3319.26 00:12:00.582 lat (usec): min=3502, max=30204, avg=8930.89, stdev=3344.95 00:12:00.582 clat 
percentiles (usec): 00:12:00.582 | 1.00th=[ 4686], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 6915], 00:12:00.582 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:12:00.582 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[11731], 95.00th=[15139], 00:12:00.582 | 99.00th=[24773], 99.50th=[26608], 99.90th=[27657], 99.95th=[27919], 00:12:00.582 | 99.99th=[30278] 00:12:00.582 bw ( KiB/s): min=26848, max=30496, per=28.59%, avg=28672.00, stdev=2579.53, samples=2 00:12:00.582 iops : min= 6712, max= 7624, avg=7168.00, stdev=644.88, samples=2 00:12:00.582 lat (msec) : 2=0.01%, 4=0.18%, 10=84.55%, 20=13.05%, 50=2.22% 00:12:00.582 cpu : usr=3.29%, sys=3.69%, ctx=727, majf=0, minf=1 00:12:00.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:00.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.582 issued rwts: total=6822,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.582 job1: (groupid=0, jobs=1): err= 0: pid=197716: Tue Nov 19 09:29:47 2024 00:12:00.582 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:12:00.582 slat (nsec): min=937, max=15600k, avg=72446.81, stdev=491278.08 00:12:00.582 clat (usec): min=2320, max=30447, avg=9520.40, stdev=3818.32 00:12:00.582 lat (usec): min=2326, max=30450, avg=9592.85, stdev=3850.63 00:12:00.582 clat percentiles (usec): 00:12:00.582 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7373], 00:12:00.582 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:12:00.582 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[14091], 95.00th=[16909], 00:12:00.582 | 99.00th=[25560], 99.50th=[30016], 99.90th=[30540], 99.95th=[30540], 00:12:00.582 | 99.99th=[30540] 00:12:00.582 write: IOPS=7250, BW=28.3MiB/s (29.7MB/s)(28.4MiB/1004msec); 0 zone resets 00:12:00.582 slat (nsec): 
min=1560, max=9251.7k, avg=60869.71, stdev=375530.05 00:12:00.582 clat (usec): min=560, max=25903, avg=8062.27, stdev=2921.02 00:12:00.582 lat (usec): min=1247, max=26388, avg=8123.13, stdev=2941.56 00:12:00.582 clat percentiles (usec): 00:12:00.582 | 1.00th=[ 2278], 5.00th=[ 4359], 10.00th=[ 5211], 20.00th=[ 6456], 00:12:00.582 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7767], 00:12:00.582 | 70.00th=[ 8356], 80.00th=[ 8979], 90.00th=[11207], 95.00th=[14353], 00:12:00.582 | 99.00th=[19530], 99.50th=[22938], 99.90th=[24249], 99.95th=[24511], 00:12:00.582 | 99.99th=[25822] 00:12:00.582 bw ( KiB/s): min=28672, max=28728, per=28.61%, avg=28700.00, stdev=39.60, samples=2 00:12:00.582 iops : min= 7168, max= 7182, avg=7175.00, stdev= 9.90, samples=2 00:12:00.582 lat (usec) : 750=0.01% 00:12:00.582 lat (msec) : 2=0.21%, 4=1.65%, 10=79.78%, 20=16.50%, 50=1.85% 00:12:00.582 cpu : usr=3.79%, sys=5.78%, ctx=725, majf=0, minf=2 00:12:00.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:00.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.582 issued rwts: total=7168,7280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.582 job2: (groupid=0, jobs=1): err= 0: pid=197717: Tue Nov 19 09:29:47 2024 00:12:00.582 read: IOPS=5106, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:12:00.582 slat (nsec): min=955, max=14523k, avg=93315.55, stdev=649266.21 00:12:00.582 clat (usec): min=2368, max=32374, avg=12493.48, stdev=4754.27 00:12:00.582 lat (usec): min=2922, max=32400, avg=12586.80, stdev=4792.44 00:12:00.582 clat percentiles (usec): 00:12:00.582 | 1.00th=[ 5342], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 9110], 00:12:00.582 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11469], 60.00th=[12518], 00:12:00.582 | 70.00th=[13435], 80.00th=[14746], 
90.00th=[18744], 95.00th=[22414], 00:12:00.582 | 99.00th=[29754], 99.50th=[30278], 99.90th=[30802], 99.95th=[30802], 00:12:00.582 | 99.99th=[32375] 00:12:00.582 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:12:00.582 slat (nsec): min=1523, max=11812k, avg=81347.10, stdev=544251.43 00:12:00.582 clat (usec): min=2699, max=42497, avg=11046.11, stdev=3966.82 00:12:00.582 lat (usec): min=2706, max=42499, avg=11127.45, stdev=3999.45 00:12:00.582 clat percentiles (usec): 00:12:00.582 | 1.00th=[ 4146], 5.00th=[ 5800], 10.00th=[ 7701], 20.00th=[ 8225], 00:12:00.582 | 30.00th=[ 9241], 40.00th=[10290], 50.00th=[10945], 60.00th=[11207], 00:12:00.582 | 70.00th=[12125], 80.00th=[12911], 90.00th=[13960], 95.00th=[15795], 00:12:00.582 | 99.00th=[26346], 99.50th=[34341], 99.90th=[38011], 99.95th=[38011], 00:12:00.582 | 99.99th=[42730] 00:12:00.582 bw ( KiB/s): min=21016, max=23040, per=21.96%, avg=22028.00, stdev=1431.18, samples=2 00:12:00.582 iops : min= 5254, max= 5760, avg=5507.00, stdev=357.80, samples=2 00:12:00.582 lat (msec) : 4=0.54%, 10=36.83%, 20=57.37%, 50=5.25% 00:12:00.582 cpu : usr=3.59%, sys=5.59%, ctx=440, majf=0, minf=1 00:12:00.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:00.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.583 issued rwts: total=5122,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.583 job3: (groupid=0, jobs=1): err= 0: pid=197718: Tue Nov 19 09:29:47 2024 00:12:00.583 read: IOPS=5023, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1005msec) 00:12:00.583 slat (nsec): min=940, max=10158k, avg=93140.63, stdev=649440.05 00:12:00.583 clat (usec): min=1522, max=55454, avg=12811.42, stdev=6941.42 00:12:00.583 lat (usec): min=1525, max=55458, avg=12904.56, stdev=6992.12 00:12:00.583 clat percentiles (usec): 
00:12:00.583 | 1.00th=[ 1975], 5.00th=[ 4621], 10.00th=[ 6259], 20.00th=[ 7832], 00:12:00.583 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[11076], 60.00th=[13173], 00:12:00.583 | 70.00th=[14222], 80.00th=[17171], 90.00th=[21890], 95.00th=[24773], 00:12:00.583 | 99.00th=[36963], 99.50th=[49021], 99.90th=[54264], 99.95th=[55313], 00:12:00.583 | 99.99th=[55313] 00:12:00.583 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:12:00.583 slat (nsec): min=1612, max=11292k, avg=73159.77, stdev=507493.68 00:12:00.583 clat (usec): min=699, max=47916, avg=12180.29, stdev=6954.85 00:12:00.583 lat (usec): min=710, max=47920, avg=12253.45, stdev=6983.95 00:12:00.583 clat percentiles (usec): 00:12:00.583 | 1.00th=[ 1205], 5.00th=[ 3261], 10.00th=[ 5014], 20.00th=[ 6915], 00:12:00.583 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[10290], 60.00th=[12125], 00:12:00.583 | 70.00th=[14484], 80.00th=[18220], 90.00th=[21627], 95.00th=[22938], 00:12:00.583 | 99.00th=[39060], 99.50th=[43254], 99.90th=[46924], 99.95th=[47973], 00:12:00.583 | 99.99th=[47973] 00:12:00.583 bw ( KiB/s): min=17144, max=23816, per=20.42%, avg=20480.00, stdev=4717.82, samples=2 00:12:00.583 iops : min= 4286, max= 5954, avg=5120.00, stdev=1179.45, samples=2 00:12:00.583 lat (usec) : 750=0.06%, 1000=0.04% 00:12:00.583 lat (msec) : 2=2.24%, 4=3.29%, 10=39.82%, 20=40.53%, 50=13.91% 00:12:00.583 lat (msec) : 100=0.11% 00:12:00.583 cpu : usr=3.19%, sys=4.58%, ctx=531, majf=0, minf=1 00:12:00.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:00.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.583 issued rwts: total=5049,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.583 00:12:00.583 Run status group 0 (all jobs): 00:12:00.583 READ: bw=93.9MiB/s (98.5MB/s), 
19.6MiB/s-27.9MiB/s (20.6MB/s-29.2MB/s), io=94.4MiB (99.0MB), run=1003-1005msec 00:12:00.583 WRITE: bw=97.9MiB/s (103MB/s), 19.9MiB/s-28.3MiB/s (20.9MB/s-29.7MB/s), io=98.4MiB (103MB), run=1003-1005msec 00:12:00.583 00:12:00.583 Disk stats (read/write): 00:12:00.583 nvme0n1: ios=5685/6005, merge=0/0, ticks=17205/17256, in_queue=34461, util=84.47% 00:12:00.583 nvme0n2: ios=6181/6144, merge=0/0, ticks=29967/26937, in_queue=56904, util=88.28% 00:12:00.583 nvme0n3: ios=4524/4608, merge=0/0, ticks=31487/28570, in_queue=60057, util=95.35% 00:12:00.583 nvme0n4: ios=4116/4530, merge=0/0, ticks=36540/38178, in_queue=74718, util=93.59% 00:12:00.583 09:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:00.583 09:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=197958 00:12:00.583 09:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:00.583 09:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:00.583 [global] 00:12:00.583 thread=1 00:12:00.583 invalidate=1 00:12:00.583 rw=read 00:12:00.583 time_based=1 00:12:00.583 runtime=10 00:12:00.583 ioengine=libaio 00:12:00.583 direct=1 00:12:00.583 bs=4096 00:12:00.583 iodepth=1 00:12:00.583 norandommap=1 00:12:00.583 numjobs=1 00:12:00.583 00:12:00.583 [job0] 00:12:00.583 filename=/dev/nvme0n1 00:12:00.583 [job1] 00:12:00.583 filename=/dev/nvme0n2 00:12:00.583 [job2] 00:12:00.583 filename=/dev/nvme0n3 00:12:00.583 [job3] 00:12:00.583 filename=/dev/nvme0n4 00:12:00.583 Could not set queue depth (nvme0n1) 00:12:00.583 Could not set queue depth (nvme0n2) 00:12:00.583 Could not set queue depth (nvme0n3) 00:12:00.583 Could not set queue depth (nvme0n4) 00:12:00.843 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.843 job1: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.843 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.843 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.843 fio-3.35 00:12:00.843 Starting 4 threads 00:12:04.147 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:04.147 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:04.147 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2527232, buflen=4096 00:12:04.147 fio: pid=198246, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:04.147 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=14540800, buflen=4096 00:12:04.147 fio: pid=198245, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:04.147 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.147 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:04.147 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=12726272, buflen=4096 00:12:04.147 fio: pid=198240, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:04.147 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.147 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:04.409 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4632576, buflen=4096 00:12:04.409 fio: pid=198241, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:04.409 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.409 09:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:04.409 00:12:04.409 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=198240: Tue Nov 19 09:29:50 2024 00:12:04.409 read: IOPS=1040, BW=4159KiB/s (4259kB/s)(12.1MiB/2988msec) 00:12:04.409 slat (usec): min=6, max=22763, avg=42.82, stdev=558.89 00:12:04.409 clat (usec): min=179, max=8567, avg=905.76, stdev=262.23 00:12:04.409 lat (usec): min=204, max=23380, avg=948.58, stdev=614.31 00:12:04.409 clat percentiles (usec): 00:12:04.409 | 1.00th=[ 306], 5.00th=[ 519], 10.00th=[ 627], 20.00th=[ 791], 00:12:04.409 | 30.00th=[ 881], 40.00th=[ 922], 50.00th=[ 963], 60.00th=[ 988], 00:12:04.409 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:12:04.409 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1270], 99.95th=[ 8291], 00:12:04.409 | 99.99th=[ 8586] 00:12:04.409 bw ( KiB/s): min= 3976, max= 4728, per=39.74%, avg=4224.00, stdev=341.48, samples=5 00:12:04.409 iops : min= 994, max= 1182, avg=1056.00, stdev=85.37, samples=5 00:12:04.409 lat (usec) : 250=0.48%, 500=3.93%, 750=12.68%, 1000=49.77% 00:12:04.409 lat (msec) : 2=33.04%, 10=0.06% 00:12:04.409 cpu : usr=0.77%, sys=3.38%, ctx=3112, majf=0, minf=1 00:12:04.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:04.409 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.409 issued rwts: total=3108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.409 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=198241: Tue Nov 19 09:29:50 2024 00:12:04.409 read: IOPS=357, BW=1430KiB/s (1465kB/s)(4524KiB/3163msec) 00:12:04.409 slat (usec): min=6, max=11946, avg=60.34, stdev=603.93 00:12:04.409 clat (usec): min=411, max=42098, avg=2721.04, stdev=8415.82 00:12:04.409 lat (usec): min=420, max=42124, avg=2781.41, stdev=8430.84 00:12:04.409 clat percentiles (usec): 00:12:04.409 | 1.00th=[ 482], 5.00th=[ 578], 10.00th=[ 676], 20.00th=[ 791], 00:12:04.409 | 30.00th=[ 865], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 979], 00:12:04.409 | 70.00th=[ 1012], 80.00th=[ 1057], 90.00th=[ 1123], 95.00th=[ 1270], 00:12:04.409 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:04.409 | 99.99th=[42206] 00:12:04.409 bw ( KiB/s): min= 96, max= 4144, per=12.25%, avg=1302.67, stdev=1888.86, samples=6 00:12:04.409 iops : min= 24, max= 1036, avg=325.67, stdev=472.21, samples=6 00:12:04.409 lat (usec) : 500=1.59%, 750=14.31%, 1000=49.82% 00:12:04.409 lat (msec) : 2=29.77%, 50=4.42% 00:12:04.409 cpu : usr=0.19%, sys=1.27%, ctx=1136, majf=0, minf=1 00:12:04.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.409 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.409 issued rwts: total=1132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.409 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=198245: Tue Nov 19 09:29:50 2024 00:12:04.409 read: 
IOPS=1268, BW=5073KiB/s (5195kB/s)(13.9MiB/2799msec) 00:12:04.409 slat (nsec): min=7200, max=58188, avg=23336.96, stdev=7443.64 00:12:04.409 clat (usec): min=430, max=1328, avg=753.39, stdev=63.12 00:12:04.409 lat (usec): min=452, max=1355, avg=776.73, stdev=64.65 00:12:04.409 clat percentiles (usec): 00:12:04.409 | 1.00th=[ 578], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 709], 00:12:04.409 | 30.00th=[ 734], 40.00th=[ 750], 50.00th=[ 766], 60.00th=[ 775], 00:12:04.409 | 70.00th=[ 791], 80.00th=[ 799], 90.00th=[ 824], 95.00th=[ 840], 00:12:04.409 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 922], 99.95th=[ 938], 00:12:04.409 | 99.99th=[ 1336] 00:12:04.409 bw ( KiB/s): min= 5064, max= 5280, per=48.25%, avg=5129.60, stdev=87.34, samples=5 00:12:04.409 iops : min= 1266, max= 1320, avg=1282.40, stdev=21.84, samples=5 00:12:04.409 lat (usec) : 500=0.20%, 750=39.51%, 1000=60.24% 00:12:04.409 lat (msec) : 2=0.03% 00:12:04.409 cpu : usr=1.43%, sys=3.32%, ctx=3551, majf=0, minf=2 00:12:04.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.409 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.409 issued rwts: total=3551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.409 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=198246: Tue Nov 19 09:29:50 2024 00:12:04.409 read: IOPS=234, BW=936KiB/s (958kB/s)(2468KiB/2638msec) 00:12:04.410 slat (nsec): min=7286, max=53898, avg=25461.65, stdev=2783.62 00:12:04.410 clat (usec): min=350, max=42053, avg=4206.55, stdev=10963.52 00:12:04.410 lat (usec): min=375, max=42079, avg=4232.01, stdev=10963.51 00:12:04.410 clat percentiles (usec): 00:12:04.410 | 1.00th=[ 603], 5.00th=[ 717], 10.00th=[ 791], 20.00th=[ 898], 00:12:04.410 | 30.00th=[ 955], 40.00th=[ 1004], 
50.00th=[ 1045], 60.00th=[ 1106], 00:12:04.410 | 70.00th=[ 1139], 80.00th=[ 1237], 90.00th=[ 1385], 95.00th=[42206], 00:12:04.410 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:04.410 | 99.99th=[42206] 00:12:04.410 bw ( KiB/s): min= 96, max= 3800, per=9.24%, avg=982.40, stdev=1606.32, samples=5 00:12:04.410 iops : min= 24, max= 950, avg=245.60, stdev=401.58, samples=5 00:12:04.410 lat (usec) : 500=0.49%, 750=5.50%, 1000=33.50% 00:12:04.410 lat (msec) : 2=52.59%, 50=7.77% 00:12:04.410 cpu : usr=0.11%, sys=0.80%, ctx=618, majf=0, minf=2 00:12:04.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.410 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.410 issued rwts: total=618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.410 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.410 00:12:04.410 Run status group 0 (all jobs): 00:12:04.410 READ: bw=10.4MiB/s (10.9MB/s), 936KiB/s-5073KiB/s (958kB/s-5195kB/s), io=32.8MiB (34.4MB), run=2638-3163msec 00:12:04.410 00:12:04.410 Disk stats (read/write): 00:12:04.410 nvme0n1: ios=3003/0, merge=0/0, ticks=2733/0, in_queue=2733, util=93.76% 00:12:04.410 nvme0n2: ios=1063/0, merge=0/0, ticks=3019/0, in_queue=3019, util=94.52% 00:12:04.410 nvme0n3: ios=3311/0, merge=0/0, ticks=2429/0, in_queue=2429, util=96.03% 00:12:04.410 nvme0n4: ios=616/0, merge=0/0, ticks=2542/0, in_queue=2542, util=96.46% 00:12:04.410 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.410 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:04.671 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.671 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:04.933 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.933 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:04.933 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.933 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:05.195 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:05.195 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 197958 00:12:05.195 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:05.195 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.195 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.195 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:05.195 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:05.195 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.195 09:29:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:05.195 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.457 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:05.457 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:05.457 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:05.457 nvmf hotplug test: fio failed as expected 00:12:05.457 09:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:05.457 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.457 09:29:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.457 rmmod nvme_tcp 00:12:05.457 rmmod nvme_fabrics 00:12:05.457 rmmod nvme_keyring 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 194216 ']' 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 194216 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 194216 ']' 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 194216 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 194216 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 194216' 00:12:05.719 killing process with pid 194216 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 194216 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 194216 
00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.719 09:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.269 00:12:08.269 real 0m29.306s 00:12:08.269 user 2m39.276s 00:12:08.269 sys 0m9.415s 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.269 ************************************ 00:12:08.269 END TEST nvmf_fio_target 00:12:08.269 ************************************ 00:12:08.269 09:29:54 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:08.269 ************************************ 00:12:08.269 START TEST nvmf_bdevio 00:12:08.269 ************************************ 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:08.269 * Looking for test storage... 00:12:08.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@337 -- # IFS=.-: 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # 
ver2[v]=2 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:08.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.269 --rc genhtml_branch_coverage=1 00:12:08.269 --rc genhtml_function_coverage=1 00:12:08.269 --rc genhtml_legend=1 00:12:08.269 --rc geninfo_all_blocks=1 00:12:08.269 --rc geninfo_unexecuted_blocks=1 00:12:08.269 00:12:08.269 ' 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:08.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.269 --rc genhtml_branch_coverage=1 00:12:08.269 --rc genhtml_function_coverage=1 00:12:08.269 --rc genhtml_legend=1 00:12:08.269 --rc geninfo_all_blocks=1 00:12:08.269 --rc geninfo_unexecuted_blocks=1 00:12:08.269 00:12:08.269 ' 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:08.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.269 --rc genhtml_branch_coverage=1 00:12:08.269 --rc genhtml_function_coverage=1 00:12:08.269 --rc genhtml_legend=1 00:12:08.269 --rc geninfo_all_blocks=1 00:12:08.269 --rc geninfo_unexecuted_blocks=1 00:12:08.269 00:12:08.269 ' 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:08.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.269 --rc 
genhtml_branch_coverage=1 00:12:08.269 --rc genhtml_function_coverage=1 00:12:08.269 --rc genhtml_legend=1 00:12:08.269 --rc geninfo_all_blocks=1 00:12:08.269 --rc geninfo_unexecuted_blocks=1 00:12:08.269 00:12:08.269 ' 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.269 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.270 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:16.427 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.427 09:30:01 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:16.427 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:16.427 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:16.427 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.427 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.428 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.428 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:16.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:12:16.428 00:12:16.428 --- 10.0.0.2 ping statistics --- 00:12:16.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.428 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:12:16.428 00:12:16.428 --- 10.0.0.1 ping statistics --- 00:12:16.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.428 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=203396 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 203396 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 203396 ']' 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.428 [2024-11-19 09:30:02.364170] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:16.428 [2024-11-19 09:30:02.364233] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.428 [2024-11-19 09:30:02.438759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.428 [2024-11-19 09:30:02.486990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.428 [2024-11-19 09:30:02.487041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:16.428 [2024-11-19 09:30:02.487047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.428 [2024-11-19 09:30:02.487053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.428 [2024-11-19 09:30:02.487059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.428 [2024-11-19 09:30:02.488902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:16.428 [2024-11-19 09:30:02.489063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:16.428 [2024-11-19 09:30:02.489234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.428 [2024-11-19 09:30:02.489234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.428 [2024-11-19 09:30:02.646023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.428 Malloc0 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.428 [2024-11-19 
09:30:02.726350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:16.428 { 00:12:16.428 "params": { 00:12:16.428 "name": "Nvme$subsystem", 00:12:16.428 "trtype": "$TEST_TRANSPORT", 00:12:16.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:16.428 "adrfam": "ipv4", 00:12:16.428 "trsvcid": "$NVMF_PORT", 00:12:16.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:16.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:16.428 "hdgst": ${hdgst:-false}, 00:12:16.428 "ddgst": ${ddgst:-false} 00:12:16.428 }, 00:12:16.428 "method": "bdev_nvme_attach_controller" 00:12:16.428 } 00:12:16.428 EOF 00:12:16.428 )") 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:16.428 09:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:16.428 "params": { 00:12:16.428 "name": "Nvme1", 00:12:16.428 "trtype": "tcp", 00:12:16.428 "traddr": "10.0.0.2", 00:12:16.428 "adrfam": "ipv4", 00:12:16.428 "trsvcid": "4420", 00:12:16.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:16.428 "hdgst": false, 00:12:16.428 "ddgst": false 00:12:16.428 }, 00:12:16.428 "method": "bdev_nvme_attach_controller" 00:12:16.428 }' 00:12:16.428 [2024-11-19 09:30:02.785260] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:16.429 [2024-11-19 09:30:02.785324] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203596 ] 00:12:16.429 [2024-11-19 09:30:02.878443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:16.429 [2024-11-19 09:30:02.935211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.429 [2024-11-19 09:30:02.935465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.429 [2024-11-19 09:30:02.935465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.690 I/O targets: 00:12:16.690 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:16.690 00:12:16.690 00:12:16.690 CUnit - A unit testing framework for C - Version 2.1-3 00:12:16.690 http://cunit.sourceforge.net/ 00:12:16.690 00:12:16.691 00:12:16.691 Suite: bdevio tests on: Nvme1n1 00:12:16.691 Test: blockdev write read block ...passed 00:12:16.691 Test: blockdev write zeroes read block ...passed 00:12:16.691 Test: blockdev write zeroes read no split ...passed 00:12:16.691 Test: blockdev write zeroes read split 
...passed 00:12:16.691 Test: blockdev write zeroes read split partial ...passed 00:12:16.691 Test: blockdev reset ...[2024-11-19 09:30:03.409494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:16.691 [2024-11-19 09:30:03.409587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x922970 (9): Bad file descriptor 00:12:16.952 [2024-11-19 09:30:03.517265] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:16.952 passed 00:12:16.952 Test: blockdev write read 8 blocks ...passed 00:12:16.952 Test: blockdev write read size > 128k ...passed 00:12:16.952 Test: blockdev write read invalid size ...passed 00:12:16.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:16.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:16.952 Test: blockdev write read max offset ...passed 00:12:16.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:16.952 Test: blockdev writev readv 8 blocks ...passed 00:12:16.952 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.214 Test: blockdev writev readv block ...passed 00:12:17.214 Test: blockdev writev readv size > 128k ...passed 00:12:17.214 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.214 Test: blockdev comparev and writev ...[2024-11-19 09:30:03.745882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.214 [2024-11-19 09:30:03.745930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.745946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.214 [2024-11-19 
09:30:03.745955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.746570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.214 [2024-11-19 09:30:03.746583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.746598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.214 [2024-11-19 09:30:03.746606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.747170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.214 [2024-11-19 09:30:03.747182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.747196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.214 [2024-11-19 09:30:03.747204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.747737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.214 [2024-11-19 09:30:03.747748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.747762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.214 [2024-11-19 09:30:03.747770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:17.214 passed 00:12:17.214 Test: blockdev nvme passthru rw ...passed 00:12:17.214 Test: blockdev nvme passthru vendor specific ...[2024-11-19 09:30:03.832056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.214 [2024-11-19 09:30:03.832073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.832486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.214 [2024-11-19 09:30:03.832497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.832886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.214 [2024-11-19 09:30:03.832897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:17.214 [2024-11-19 09:30:03.833291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.214 [2024-11-19 09:30:03.833303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:17.214 passed 00:12:17.214 Test: blockdev nvme admin passthru ...passed 00:12:17.214 Test: blockdev copy ...passed 00:12:17.214 00:12:17.214 Run Summary: Type Total Ran Passed Failed Inactive 00:12:17.214 suites 1 1 n/a 0 0 00:12:17.214 tests 23 23 23 0 0 00:12:17.214 asserts 152 152 152 0 n/a 00:12:17.214 00:12:17.214 Elapsed time = 1.256 seconds 
00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.477 rmmod nvme_tcp 00:12:17.477 rmmod nvme_fabrics 00:12:17.477 rmmod nvme_keyring 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 203396 ']' 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 203396 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 203396 ']' 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 203396 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203396 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203396' 00:12:17.477 killing process with pid 203396 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 203396 00:12:17.477 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 203396 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.739 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.290 09:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.290 00:12:20.290 real 0m11.901s 00:12:20.290 user 0m12.197s 00:12:20.290 sys 0m6.216s 00:12:20.290 09:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.290 09:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:20.290 ************************************ 00:12:20.290 END TEST nvmf_bdevio 00:12:20.290 ************************************ 00:12:20.290 09:30:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:20.290 00:12:20.290 real 5m3.198s 00:12:20.290 user 11m55.949s 00:12:20.290 sys 1m49.267s 00:12:20.290 09:30:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.290 09:30:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:20.290 ************************************ 00:12:20.290 END TEST nvmf_target_core 00:12:20.290 ************************************ 00:12:20.291 09:30:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:20.291 09:30:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.291 09:30:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.291 09:30:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:12:20.291 ************************************ 00:12:20.291 START TEST nvmf_target_extra 00:12:20.291 ************************************ 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:20.291 * Looking for test storage... 00:12:20.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.291 --rc genhtml_branch_coverage=1 00:12:20.291 --rc genhtml_function_coverage=1 00:12:20.291 --rc genhtml_legend=1 00:12:20.291 --rc geninfo_all_blocks=1 
00:12:20.291 --rc geninfo_unexecuted_blocks=1 00:12:20.291 00:12:20.291 ' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.291 --rc genhtml_branch_coverage=1 00:12:20.291 --rc genhtml_function_coverage=1 00:12:20.291 --rc genhtml_legend=1 00:12:20.291 --rc geninfo_all_blocks=1 00:12:20.291 --rc geninfo_unexecuted_blocks=1 00:12:20.291 00:12:20.291 ' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.291 --rc genhtml_branch_coverage=1 00:12:20.291 --rc genhtml_function_coverage=1 00:12:20.291 --rc genhtml_legend=1 00:12:20.291 --rc geninfo_all_blocks=1 00:12:20.291 --rc geninfo_unexecuted_blocks=1 00:12:20.291 00:12:20.291 ' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.291 --rc genhtml_branch_coverage=1 00:12:20.291 --rc genhtml_function_coverage=1 00:12:20.291 --rc genhtml_legend=1 00:12:20.291 --rc geninfo_all_blocks=1 00:12:20.291 --rc geninfo_unexecuted_blocks=1 00:12:20.291 00:12:20.291 ' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:20.291 09:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:20.292 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.292 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.292 09:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.292 ************************************ 00:12:20.292 START TEST nvmf_example 00:12:20.292 ************************************ 00:12:20.292 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:20.292 * Looking for test storage... 00:12:20.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.292 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:20.292 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:20.292 09:30:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.554 
09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.554 --rc genhtml_branch_coverage=1 00:12:20.554 --rc genhtml_function_coverage=1 00:12:20.554 --rc genhtml_legend=1 00:12:20.554 --rc geninfo_all_blocks=1 00:12:20.554 --rc geninfo_unexecuted_blocks=1 00:12:20.554 00:12:20.554 ' 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.554 --rc genhtml_branch_coverage=1 00:12:20.554 --rc genhtml_function_coverage=1 00:12:20.554 --rc genhtml_legend=1 00:12:20.554 --rc geninfo_all_blocks=1 00:12:20.554 --rc geninfo_unexecuted_blocks=1 00:12:20.554 00:12:20.554 ' 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.554 --rc genhtml_branch_coverage=1 00:12:20.554 --rc genhtml_function_coverage=1 00:12:20.554 --rc genhtml_legend=1 00:12:20.554 --rc geninfo_all_blocks=1 00:12:20.554 --rc geninfo_unexecuted_blocks=1 00:12:20.554 00:12:20.554 ' 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.554 --rc 
genhtml_branch_coverage=1 00:12:20.554 --rc genhtml_function_coverage=1 00:12:20.554 --rc genhtml_legend=1 00:12:20.554 --rc geninfo_all_blocks=1 00:12:20.554 --rc geninfo_unexecuted_blocks=1 00:12:20.554 00:12:20.554 ' 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.554 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:20.555 09:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.555 
09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.555 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.717 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:28.718 09:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:28.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:28.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:28.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.718 09:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:28.718 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.718 
09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:28.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:12:28.718 00:12:28.718 --- 10.0.0.2 ping statistics --- 00:12:28.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.718 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:12:28.718 00:12:28.718 --- 10.0.0.1 ping statistics --- 00:12:28.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.718 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:28.718 09:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=208605 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:28.718 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:28.719 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 208605 00:12:28.719 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 208605 ']' 00:12:28.719 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.719 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.719 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:28.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.719 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.719 09:30:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:28.980 09:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:28.980 09:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:41.219 Initializing NVMe Controllers 00:12:41.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:41.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:41.219 Initialization complete. Launching workers. 00:12:41.219 ======================================================== 00:12:41.219 Latency(us) 00:12:41.219 Device Information : IOPS MiB/s Average min max 00:12:41.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18907.70 73.86 3384.35 560.12 18000.46 00:12:41.219 ======================================================== 00:12:41.219 Total : 18907.70 73.86 3384.35 560.12 18000.46 00:12:41.219 00:12:41.219 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:41.219 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:41.219 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:41.219 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:41.219 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.219 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:41.219 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.219 09:30:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.219 rmmod nvme_tcp 00:12:41.219 rmmod nvme_fabrics 00:12:41.219 rmmod nvme_keyring 00:12:41.219 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.219 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 208605 ']' 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 208605 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 208605 ']' 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 208605 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 208605 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 208605' 00:12:41.220 killing process with pid 208605 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 208605 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 208605 00:12:41.220 nvmf threads initialize successfully 00:12:41.220 bdev subsystem init successfully 00:12:41.220 created a nvmf target service 00:12:41.220 create targets's poll groups done 00:12:41.220 all subsystems of target started 00:12:41.220 nvmf target is running 00:12:41.220 all subsystems of target stopped 00:12:41.220 destroy targets's poll groups done 00:12:41.220 destroyed the nvmf target service 00:12:41.220 bdev subsystem finish 
successfully 00:12:41.220 nvmf threads destroy successfully 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.220 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 00:12:41.794 real 0m21.472s 00:12:41.794 user 0m47.042s 00:12:41.794 sys 0m6.964s 00:12:41.794 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 ************************************ 00:12:41.794 END TEST nvmf_example 00:12:41.794 ************************************ 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 ************************************ 00:12:41.794 START TEST nvmf_filesystem 00:12:41.794 ************************************ 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:41.794 * Looking for test storage... 
00:12:41.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:41.794 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:42.061 
09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:42.061 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:42.061 --rc genhtml_branch_coverage=1 00:12:42.061 --rc genhtml_function_coverage=1 00:12:42.061 --rc genhtml_legend=1 00:12:42.061 --rc geninfo_all_blocks=1 00:12:42.061 --rc geninfo_unexecuted_blocks=1 00:12:42.061 00:12:42.061 ' 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:42.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.061 --rc genhtml_branch_coverage=1 00:12:42.061 --rc genhtml_function_coverage=1 00:12:42.061 --rc genhtml_legend=1 00:12:42.061 --rc geninfo_all_blocks=1 00:12:42.061 --rc geninfo_unexecuted_blocks=1 00:12:42.061 00:12:42.061 ' 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:42.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.061 --rc genhtml_branch_coverage=1 00:12:42.061 --rc genhtml_function_coverage=1 00:12:42.061 --rc genhtml_legend=1 00:12:42.061 --rc geninfo_all_blocks=1 00:12:42.061 --rc geninfo_unexecuted_blocks=1 00:12:42.061 00:12:42.061 ' 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:42.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.061 --rc genhtml_branch_coverage=1 00:12:42.061 --rc genhtml_function_coverage=1 00:12:42.061 --rc genhtml_legend=1 00:12:42.061 --rc geninfo_all_blocks=1 00:12:42.061 --rc geninfo_unexecuted_blocks=1 00:12:42.061 00:12:42.061 ' 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:42.061 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:42.061 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:42.061 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:42.062 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:42.062 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:42.062 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:42.062 
09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:42.062 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:42.062 #define SPDK_CONFIG_H 00:12:42.062 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:42.062 #define SPDK_CONFIG_APPS 1 00:12:42.062 #define SPDK_CONFIG_ARCH native 00:12:42.062 #undef SPDK_CONFIG_ASAN 00:12:42.062 #undef SPDK_CONFIG_AVAHI 00:12:42.062 #undef SPDK_CONFIG_CET 00:12:42.062 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:42.062 #define SPDK_CONFIG_COVERAGE 1 00:12:42.062 #define SPDK_CONFIG_CROSS_PREFIX 00:12:42.062 #undef SPDK_CONFIG_CRYPTO 00:12:42.062 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:42.062 #undef SPDK_CONFIG_CUSTOMOCF 00:12:42.062 #undef SPDK_CONFIG_DAOS 00:12:42.062 #define SPDK_CONFIG_DAOS_DIR 00:12:42.062 #define SPDK_CONFIG_DEBUG 1 00:12:42.062 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:42.062 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:42.062 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:42.062 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:42.062 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:42.062 #undef SPDK_CONFIG_DPDK_UADK 00:12:42.063 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:42.063 #define SPDK_CONFIG_EXAMPLES 1 00:12:42.063 #undef SPDK_CONFIG_FC 00:12:42.063 #define SPDK_CONFIG_FC_PATH 00:12:42.063 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:42.063 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:42.063 #define SPDK_CONFIG_FSDEV 1 00:12:42.063 #undef SPDK_CONFIG_FUSE 00:12:42.063 #undef SPDK_CONFIG_FUZZER 00:12:42.063 #define SPDK_CONFIG_FUZZER_LIB 00:12:42.063 #undef SPDK_CONFIG_GOLANG 00:12:42.063 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:42.063 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:42.063 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:42.063 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:42.063 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:42.063 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:42.063 #undef SPDK_CONFIG_HAVE_LZ4 00:12:42.063 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:42.063 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:42.063 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:42.063 #define SPDK_CONFIG_IDXD 1 00:12:42.063 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:42.063 #undef SPDK_CONFIG_IPSEC_MB 00:12:42.063 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:42.063 #define SPDK_CONFIG_ISAL 1 00:12:42.063 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:42.063 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:42.063 #define SPDK_CONFIG_LIBDIR 00:12:42.063 #undef SPDK_CONFIG_LTO 00:12:42.063 #define SPDK_CONFIG_MAX_LCORES 128 00:12:42.063 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:42.063 #define SPDK_CONFIG_NVME_CUSE 1 00:12:42.063 #undef SPDK_CONFIG_OCF 00:12:42.063 #define SPDK_CONFIG_OCF_PATH 00:12:42.063 #define SPDK_CONFIG_OPENSSL_PATH 00:12:42.063 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:42.063 #define SPDK_CONFIG_PGO_DIR 00:12:42.063 #undef SPDK_CONFIG_PGO_USE 00:12:42.063 #define SPDK_CONFIG_PREFIX /usr/local 00:12:42.063 #undef SPDK_CONFIG_RAID5F 00:12:42.063 #undef SPDK_CONFIG_RBD 00:12:42.063 #define SPDK_CONFIG_RDMA 1 00:12:42.063 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:42.063 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:42.063 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:42.063 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:42.063 #define SPDK_CONFIG_SHARED 1 00:12:42.063 #undef SPDK_CONFIG_SMA 00:12:42.063 #define SPDK_CONFIG_TESTS 1 00:12:42.063 #undef SPDK_CONFIG_TSAN 00:12:42.063 #define SPDK_CONFIG_UBLK 1 00:12:42.063 #define SPDK_CONFIG_UBSAN 1 00:12:42.063 #undef SPDK_CONFIG_UNIT_TESTS 00:12:42.063 #undef SPDK_CONFIG_URING 00:12:42.063 #define SPDK_CONFIG_URING_PATH 00:12:42.063 #undef SPDK_CONFIG_URING_ZNS 00:12:42.063 #undef SPDK_CONFIG_USDT 00:12:42.063 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:42.063 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:42.063 #define SPDK_CONFIG_VFIO_USER 1 00:12:42.063 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:42.063 #define SPDK_CONFIG_VHOST 1 00:12:42.063 #define SPDK_CONFIG_VIRTIO 1 00:12:42.063 #undef SPDK_CONFIG_VTUNE 00:12:42.063 #define SPDK_CONFIG_VTUNE_DIR 00:12:42.063 #define SPDK_CONFIG_WERROR 1 00:12:42.063 #define SPDK_CONFIG_WPDK_DIR 00:12:42.063 #undef SPDK_CONFIG_XNVME 00:12:42.063 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:42.063 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:42.063 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:42.064 
09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:42.064 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:42.064 
09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:42.064 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:42.064 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:42.065 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 211424 ]] 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 211424 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.pEPpTI 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.pEPpTI/tests/target /tmp/spdk.pEPpTI 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=123971637248 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5384871936 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668221440 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847947264 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23355392 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:12:42.066 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64678084608 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=172032 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:42.066 * Looking for test storage... 
00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=123971637248 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=7599464448 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.066 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:42.066 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:42.067 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:42.067 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:42.067 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:42.067 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:42.067 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:42.067 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:42.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.330 --rc genhtml_branch_coverage=1 00:12:42.330 --rc genhtml_function_coverage=1 00:12:42.330 --rc genhtml_legend=1 00:12:42.330 --rc geninfo_all_blocks=1 00:12:42.330 --rc geninfo_unexecuted_blocks=1 00:12:42.330 00:12:42.330 ' 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:42.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.330 --rc genhtml_branch_coverage=1 00:12:42.330 --rc genhtml_function_coverage=1 00:12:42.330 --rc genhtml_legend=1 00:12:42.330 --rc geninfo_all_blocks=1 00:12:42.330 --rc geninfo_unexecuted_blocks=1 00:12:42.330 00:12:42.330 ' 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:42.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.330 --rc genhtml_branch_coverage=1 00:12:42.330 --rc genhtml_function_coverage=1 00:12:42.330 --rc genhtml_legend=1 00:12:42.330 --rc geninfo_all_blocks=1 00:12:42.330 --rc geninfo_unexecuted_blocks=1 00:12:42.330 00:12:42.330 ' 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:42.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.330 --rc genhtml_branch_coverage=1 00:12:42.330 --rc genhtml_function_coverage=1 00:12:42.330 --rc genhtml_legend=1 00:12:42.330 --rc geninfo_all_blocks=1 00:12:42.330 --rc geninfo_unexecuted_blocks=1 00:12:42.330 00:12:42.330 ' 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.330 09:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.330 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.331 09:30:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.483 09:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:50.483 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:50.483 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.483 09:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.483 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:50.484 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:50.484 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:50.484 09:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:50.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:12:50.484 00:12:50.484 --- 10.0.0.2 ping statistics --- 00:12:50.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.484 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:12:50.484 00:12:50.484 --- 10.0.0.1 ping statistics --- 00:12:50.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.484 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:50.484 09:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:50.484 ************************************ 00:12:50.484 START TEST nvmf_filesystem_no_in_capsule 00:12:50.484 ************************************ 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=215345 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 215345 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 215345 ']' 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.484 [2024-11-19 09:30:36.545180] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:50.484 [2024-11-19 09:30:36.545241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.484 [2024-11-19 09:30:36.626145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.484 [2024-11-19 09:30:36.673565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.484 [2024-11-19 09:30:36.673619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:50.484 [2024-11-19 09:30:36.673626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.484 [2024-11-19 09:30:36.673632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.484 [2024-11-19 09:30:36.673637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.484 [2024-11-19 09:30:36.675412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.484 [2024-11-19 09:30:36.675571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.484 [2024-11-19 09:30:36.675735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.484 [2024-11-19 09:30:36.675735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.484 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.485 [2024-11-19 09:30:36.837237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.485 Malloc1 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.485 [2024-11-19 09:30:36.989232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:50.485 09:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.485 09:30:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:50.485 { 00:12:50.485 "name": "Malloc1", 00:12:50.485 "aliases": [ 00:12:50.485 "ad0f3317-2597-421a-82f6-30874624c8da" 00:12:50.485 ], 00:12:50.485 "product_name": "Malloc disk", 00:12:50.485 "block_size": 512, 00:12:50.485 "num_blocks": 1048576, 00:12:50.485 "uuid": "ad0f3317-2597-421a-82f6-30874624c8da", 00:12:50.485 "assigned_rate_limits": { 00:12:50.485 "rw_ios_per_sec": 0, 00:12:50.485 "rw_mbytes_per_sec": 0, 00:12:50.485 "r_mbytes_per_sec": 0, 00:12:50.485 "w_mbytes_per_sec": 0 00:12:50.485 }, 00:12:50.485 "claimed": true, 00:12:50.485 "claim_type": "exclusive_write", 00:12:50.485 "zoned": false, 00:12:50.485 "supported_io_types": { 00:12:50.485 "read": true, 00:12:50.485 "write": true, 00:12:50.485 "unmap": true, 00:12:50.485 "flush": true, 00:12:50.485 "reset": true, 00:12:50.485 "nvme_admin": false, 00:12:50.485 "nvme_io": false, 00:12:50.485 "nvme_io_md": false, 00:12:50.485 "write_zeroes": true, 00:12:50.485 "zcopy": true, 00:12:50.485 "get_zone_info": false, 00:12:50.485 "zone_management": false, 00:12:50.485 "zone_append": false, 00:12:50.485 "compare": false, 00:12:50.485 "compare_and_write": 
false, 00:12:50.485 "abort": true, 00:12:50.485 "seek_hole": false, 00:12:50.485 "seek_data": false, 00:12:50.485 "copy": true, 00:12:50.485 "nvme_iov_md": false 00:12:50.485 }, 00:12:50.485 "memory_domains": [ 00:12:50.485 { 00:12:50.485 "dma_device_id": "system", 00:12:50.485 "dma_device_type": 1 00:12:50.485 }, 00:12:50.485 { 00:12:50.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.485 "dma_device_type": 2 00:12:50.485 } 00:12:50.485 ], 00:12:50.485 "driver_specific": {} 00:12:50.485 } 00:12:50.485 ]' 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:50.485 09:30:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.404 09:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:52.404 09:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:52.404 09:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.404 09:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:52.404 09:30:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:54.324 09:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:54.324 09:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:54.897 09:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:55.839 09:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.839 ************************************ 00:12:55.839 START TEST filesystem_ext4 00:12:55.839 ************************************ 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:55.839 09:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:55.839 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:55.839 mke2fs 1.47.0 (5-Feb-2023) 00:12:56.101 Discarding device blocks: 0/522240 done 00:12:56.101 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:56.101 Filesystem UUID: 5bd82c5b-8a1c-4dee-9999-2ed3a24177f8 00:12:56.101 Superblock backups stored on blocks: 00:12:56.101 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:56.101 00:12:56.101 Allocating group tables: 0/64 done 00:12:56.101 Writing inode tables: 0/64 done 00:12:56.101 Creating journal (8192 blocks): done 00:12:56.101 Writing superblocks and filesystem accounting information: 0/64 done 00:12:56.101 00:12:56.101 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:56.101 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:02.690 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:02.691 09:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 215345 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:02.691 00:13:02.691 real 0m5.902s 00:13:02.691 user 0m0.019s 00:13:02.691 sys 0m0.135s 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:02.691 ************************************ 00:13:02.691 END TEST filesystem_ext4 00:13:02.691 ************************************ 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:02.691 
09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.691 ************************************ 00:13:02.691 START TEST filesystem_btrfs 00:13:02.691 ************************************ 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:02.691 09:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:02.691 btrfs-progs v6.8.1 00:13:02.691 See https://btrfs.readthedocs.io for more information. 00:13:02.691 00:13:02.691 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:02.691 NOTE: several default settings have changed in version 5.15, please make sure 00:13:02.691 this does not affect your deployments: 00:13:02.691 - DUP for metadata (-m dup) 00:13:02.691 - enabled no-holes (-O no-holes) 00:13:02.691 - enabled free-space-tree (-R free-space-tree) 00:13:02.691 00:13:02.691 Label: (null) 00:13:02.691 UUID: 5fd4ec54-2fb5-4949-8e48-e8664f79c824 00:13:02.691 Node size: 16384 00:13:02.691 Sector size: 4096 (CPU page size: 4096) 00:13:02.691 Filesystem size: 510.00MiB 00:13:02.691 Block group profiles: 00:13:02.691 Data: single 8.00MiB 00:13:02.691 Metadata: DUP 32.00MiB 00:13:02.691 System: DUP 8.00MiB 00:13:02.691 SSD detected: yes 00:13:02.691 Zoned device: no 00:13:02.691 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:02.691 Checksum: crc32c 00:13:02.691 Number of devices: 1 00:13:02.691 Devices: 00:13:02.691 ID SIZE PATH 00:13:02.691 1 510.00MiB /dev/nvme0n1p1 00:13:02.691 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:02.691 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:02.691 09:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 215345 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:02.691 00:13:02.691 real 0m0.858s 00:13:02.691 user 0m0.025s 00:13:02.691 sys 0m0.175s 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.691 
09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:02.691 ************************************ 00:13:02.691 END TEST filesystem_btrfs 00:13:02.691 ************************************ 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.691 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.951 ************************************ 00:13:02.952 START TEST filesystem_xfs 00:13:02.952 ************************************ 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:02.952 09:30:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:02.952 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:02.952 = sectsz=512 attr=2, projid32bit=1 00:13:02.952 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:02.952 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:02.952 data = bsize=4096 blocks=130560, imaxpct=25 00:13:02.952 = sunit=0 swidth=0 blks 00:13:02.952 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:02.952 log =internal log bsize=4096 blocks=16384, version=2 00:13:02.952 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:02.952 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:03.893 Discarding blocks...Done. 
00:13:03.893 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:03.893 09:30:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 215345 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:07.195 09:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:07.195 00:13:07.195 real 0m3.838s 00:13:07.195 user 0m0.027s 00:13:07.195 sys 0m0.131s 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:07.195 ************************************ 00:13:07.195 END TEST filesystem_xfs 00:13:07.195 ************************************ 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 215345 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 215345 ']' 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 215345 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 215345 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 215345' 00:13:07.195 killing process with pid 215345 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 215345 00:13:07.195 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 215345 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:07.457 00:13:07.457 real 0m17.590s 00:13:07.457 user 1m9.360s 00:13:07.457 sys 0m1.581s 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.457 ************************************ 00:13:07.457 END TEST nvmf_filesystem_no_in_capsule 00:13:07.457 ************************************ 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.457 09:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:07.457 ************************************ 00:13:07.457 START TEST nvmf_filesystem_in_capsule 00:13:07.457 ************************************ 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=218934 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 218934 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 218934 ']' 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.457 09:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.718 [2024-11-19 09:30:54.213550] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:07.718 [2024-11-19 09:30:54.213596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.718 [2024-11-19 09:30:54.304927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.718 [2024-11-19 09:30:54.336010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.718 [2024-11-19 09:30:54.336041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.718 [2024-11-19 09:30:54.336047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.718 [2024-11-19 09:30:54.336051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.718 [2024-11-19 09:30:54.336056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:07.718 [2024-11-19 09:30:54.337282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.718 [2024-11-19 09:30:54.337432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.718 [2024-11-19 09:30:54.337578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.718 [2024-11-19 09:30:54.337581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.298 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.298 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:08.298 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:08.298 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:08.299 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.563 [2024-11-19 09:30:55.058151] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.563 Malloc1 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.563 09:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.563 [2024-11-19 09:30:55.181790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.563 09:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:08.563 { 00:13:08.563 "name": "Malloc1", 00:13:08.563 "aliases": [ 00:13:08.563 "2a39b1e0-2409-4ab7-8be6-8d6e11ca011d" 00:13:08.563 ], 00:13:08.563 "product_name": "Malloc disk", 00:13:08.563 "block_size": 512, 00:13:08.563 "num_blocks": 1048576, 00:13:08.563 "uuid": "2a39b1e0-2409-4ab7-8be6-8d6e11ca011d", 00:13:08.563 "assigned_rate_limits": { 00:13:08.563 "rw_ios_per_sec": 0, 00:13:08.563 "rw_mbytes_per_sec": 0, 00:13:08.563 "r_mbytes_per_sec": 0, 00:13:08.563 "w_mbytes_per_sec": 0 00:13:08.563 }, 00:13:08.563 "claimed": true, 00:13:08.563 "claim_type": "exclusive_write", 00:13:08.563 "zoned": false, 00:13:08.563 "supported_io_types": { 00:13:08.563 "read": true, 00:13:08.563 "write": true, 00:13:08.563 "unmap": true, 00:13:08.563 "flush": true, 00:13:08.563 "reset": true, 00:13:08.563 "nvme_admin": false, 00:13:08.563 "nvme_io": false, 00:13:08.563 "nvme_io_md": false, 00:13:08.563 "write_zeroes": true, 00:13:08.563 "zcopy": true, 00:13:08.563 "get_zone_info": false, 00:13:08.563 "zone_management": false, 00:13:08.563 "zone_append": false, 00:13:08.563 "compare": false, 00:13:08.563 "compare_and_write": false, 00:13:08.563 "abort": true, 00:13:08.563 "seek_hole": false, 00:13:08.563 "seek_data": false, 00:13:08.563 "copy": true, 00:13:08.563 "nvme_iov_md": false 00:13:08.563 }, 00:13:08.563 "memory_domains": [ 00:13:08.563 { 00:13:08.563 "dma_device_id": "system", 00:13:08.563 "dma_device_type": 1 00:13:08.563 }, 00:13:08.563 { 00:13:08.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.563 "dma_device_type": 2 00:13:08.563 } 00:13:08.563 ], 00:13:08.563 
"driver_specific": {} 00:13:08.563 } 00:13:08.563 ]' 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:08.563 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:08.824 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:08.824 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:08.824 09:30:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.209 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.209 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:10.209 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.209 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:13:10.209 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:12.123 09:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:12.123 09:30:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:12.384 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:12.964 09:30:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:13.913 ************************************ 00:13:13.913 START TEST filesystem_in_capsule_ext4 00:13:13.913 ************************************ 00:13:13.913 09:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:13.913 09:31:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:13.913 mke2fs 1.47.0 (5-Feb-2023) 00:13:13.913 Discarding device blocks: 
0/522240 done 00:13:13.913 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:13.913 Filesystem UUID: b6c92116-0df0-4cee-bc4e-3329c7abf8ad 00:13:13.913 Superblock backups stored on blocks: 00:13:13.913 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:13.913 00:13:13.913 Allocating group tables: 0/64 done 00:13:13.913 Writing inode tables: 0/64 done 00:13:14.175 Creating journal (8192 blocks): done 00:13:16.505 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:13:16.505 00:13:16.505 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:16.505 09:31:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:21.798 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:21.798 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:21.798 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:21.798 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 218934 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:22.060 00:13:22.060 real 0m8.177s 00:13:22.060 user 0m0.033s 00:13:22.060 sys 0m0.077s 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:22.060 ************************************ 00:13:22.060 END TEST filesystem_in_capsule_ext4 00:13:22.060 ************************************ 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.060 ************************************ 00:13:22.060 START 
TEST filesystem_in_capsule_btrfs 00:13:22.060 ************************************ 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:22.060 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:22.321 btrfs-progs v6.8.1 00:13:22.321 See https://btrfs.readthedocs.io for more information. 00:13:22.321 00:13:22.321 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:22.321 NOTE: several default settings have changed in version 5.15, please make sure 00:13:22.321 this does not affect your deployments: 00:13:22.321 - DUP for metadata (-m dup) 00:13:22.321 - enabled no-holes (-O no-holes) 00:13:22.321 - enabled free-space-tree (-R free-space-tree) 00:13:22.321 00:13:22.321 Label: (null) 00:13:22.321 UUID: fd988956-bc5a-4f0d-9f76-63fa5826e828 00:13:22.321 Node size: 16384 00:13:22.321 Sector size: 4096 (CPU page size: 4096) 00:13:22.321 Filesystem size: 510.00MiB 00:13:22.321 Block group profiles: 00:13:22.321 Data: single 8.00MiB 00:13:22.321 Metadata: DUP 32.00MiB 00:13:22.321 System: DUP 8.00MiB 00:13:22.321 SSD detected: yes 00:13:22.321 Zoned device: no 00:13:22.321 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:22.321 Checksum: crc32c 00:13:22.321 Number of devices: 1 00:13:22.321 Devices: 00:13:22.321 ID SIZE PATH 00:13:22.321 1 510.00MiB /dev/nvme0n1p1 00:13:22.321 00:13:22.322 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:22.322 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 218934 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:22.896 00:13:22.896 real 0m0.804s 00:13:22.896 user 0m0.021s 00:13:22.896 sys 0m0.123s 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:22.896 ************************************ 00:13:22.896 END TEST filesystem_in_capsule_btrfs 00:13:22.896 ************************************ 00:13:22.896 09:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.896 ************************************ 00:13:22.896 START TEST filesystem_in_capsule_xfs 00:13:22.896 ************************************ 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:22.896 
09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:22.896 09:31:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:23.157 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:23.157 = sectsz=512 attr=2, projid32bit=1 00:13:23.157 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:23.157 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:23.157 data = bsize=4096 blocks=130560, imaxpct=25 00:13:23.157 = sunit=0 swidth=0 blks 00:13:23.157 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:23.157 log =internal log bsize=4096 blocks=16384, version=2 00:13:23.157 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:23.157 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:24.102 Discarding blocks...Done. 
00:13:24.102 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:24.102 09:31:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 218934 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:26.649 00:13:26.649 real 0m3.526s 00:13:26.649 user 0m0.025s 00:13:26.649 sys 0m0.081s 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 ************************************ 00:13:26.649 END TEST filesystem_in_capsule_xfs 00:13:26.649 ************************************ 00:13:26.649 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.911 09:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 218934 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 218934 ']' 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 218934 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.911 09:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 218934 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 218934' 00:13:26.911 killing process with pid 218934 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 218934 00:13:26.911 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 218934 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:27.173 00:13:27.173 real 0m19.696s 00:13:27.173 user 1m17.917s 00:13:27.173 sys 0m1.457s 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.173 ************************************ 00:13:27.173 END TEST nvmf_filesystem_in_capsule 00:13:27.173 ************************************ 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.173 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.173 rmmod nvme_tcp 00:13:27.435 rmmod nvme_fabrics 00:13:27.435 rmmod nvme_keyring 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.435 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.351 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.351 00:13:29.351 real 0m47.659s 00:13:29.351 user 2m29.702s 00:13:29.351 sys 0m8.932s 00:13:29.351 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.351 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:29.351 ************************************ 00:13:29.351 END TEST nvmf_filesystem 00:13:29.351 ************************************ 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.612 ************************************ 00:13:29.612 START TEST nvmf_target_discovery 00:13:29.612 ************************************ 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:29.612 * Looking for test storage... 
00:13:29.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:29.612 
09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.612 --rc genhtml_branch_coverage=1 00:13:29.612 --rc genhtml_function_coverage=1 00:13:29.612 --rc genhtml_legend=1 00:13:29.612 --rc geninfo_all_blocks=1 00:13:29.612 --rc geninfo_unexecuted_blocks=1 00:13:29.612 00:13:29.612 ' 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.612 --rc genhtml_branch_coverage=1 00:13:29.612 --rc genhtml_function_coverage=1 00:13:29.612 --rc genhtml_legend=1 00:13:29.612 --rc geninfo_all_blocks=1 00:13:29.612 --rc geninfo_unexecuted_blocks=1 00:13:29.612 00:13:29.612 ' 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.612 --rc genhtml_branch_coverage=1 00:13:29.612 --rc genhtml_function_coverage=1 00:13:29.612 --rc genhtml_legend=1 00:13:29.612 --rc geninfo_all_blocks=1 00:13:29.612 --rc geninfo_unexecuted_blocks=1 00:13:29.612 00:13:29.612 ' 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.612 --rc genhtml_branch_coverage=1 00:13:29.612 --rc genhtml_function_coverage=1 00:13:29.612 --rc genhtml_legend=1 00:13:29.612 --rc geninfo_all_blocks=1 00:13:29.612 --rc geninfo_unexecuted_blocks=1 00:13:29.612 00:13:29.612 ' 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.612 09:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:29.612 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.613 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.613 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:29.613 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.613 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.613 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.875 09:31:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.035 09:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.035 09:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:38.035 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:38.035 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:38.035 09:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:38.035 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:38.036 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:38.036 09:31:23 
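Steps @411 and @427 in the trace resolve each PCI function to its kernel interface name: glob the device's net/ directory in sysfs, then strip the directory prefix with a `##*/` parameter expansion. A sketch (the BDF 0000:4b:00.0 and the cvl_0_0 name are this run's values and will differ on other hosts):

```shell
# How "Found net devices under 0000:4b:00.0: cvl_0_0" is derived.
pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the basename
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

The `##*/` expansion drops everything up to the last slash in each array element, turning full sysfs paths into bare interface names.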
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:38.036 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:38.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:13:38.036 00:13:38.036 --- 10.0.0.2 ping statistics --- 00:13:38.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.036 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:38.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:13:38.036 00:13:38.036 --- 10.0.0.1 ping statistics --- 00:13:38.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.036 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=227176 00:13:38.036 09:31:23 
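The nvmf_tcp_init sequence above builds a point-to-point topology out of the two E810 ports without any switch configuration: one port is moved into a fresh network namespace to act as the target, its twin stays in the root namespace as the initiator, and an iptables rule opens TCP port 4420 before a ping in each direction verifies connectivity. Condensed from the trace (interface names, addresses, and the namespace name are this run's values; requires root):

```shell
TARGET_IF=cvl_0_0   # becomes the target side, inside the namespace
INIT_IF=cvl_0_1     # stays in the root namespace as the initiator
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INIT_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
```

This is also why nvmf_tgt is launched with `ip netns exec cvl_0_0_ns_spdk` later in the log: the target process must live in the namespace that owns 10.0.0.2.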
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 227176 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 227176 ']' 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.036 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.036 [2024-11-19 09:31:23.859124] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:38.036 [2024-11-19 09:31:23.859220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.036 [2024-11-19 09:31:23.960118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.036 [2024-11-19 09:31:24.012558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:38.036 [2024-11-19 09:31:24.012610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.036 [2024-11-19 09:31:24.012618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.036 [2024-11-19 09:31:24.012626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.036 [2024-11-19 09:31:24.012632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.036 [2024-11-19 09:31:24.014685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.036 [2024-11-19 09:31:24.014844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.036 [2024-11-19 09:31:24.015005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.036 [2024-11-19 09:31:24.015005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.036 [2024-11-19 09:31:24.735026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:38.036 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.037 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.037 Null1 00:13:38.037 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.037 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:38.037 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.037 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 
09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 [2024-11-19 09:31:24.807516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 Null2 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 
09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 Null3 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.300 Null4 00:13:38.300 
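The discovery.sh loop being traced repeats the same three RPCs for i in 1..4: create a null bdev, create a subsystem with a matching serial number, attach the bdev as a namespace, and expose the subsystem on the TCP listener. Reconstructed from the @26-@30 trace lines (the scripts/rpc.py path is the usual SPDK layout, assumed here; a running nvmf_tgt is required):

```shell
RPC=${RPC:-scripts/rpc.py}   # SPDK's JSON-RPC client; assumes nvmf_tgt is up

for i in $(seq 1 4); do
    "$RPC" bdev_null_create "Null$i" 102400 512
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "$(printf 'SPDK%014d' "$i")"     # e.g. SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
```

The 102400/512 arguments match the trace: a 100 MiB null bdev with 512-byte blocks, one per subsystem.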
09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.300 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:13:38.562 00:13:38.562 Discovery Log Number of Records 6, Generation counter 6 00:13:38.562 =====Discovery Log Entry 0====== 00:13:38.562 trtype: tcp 00:13:38.562 adrfam: ipv4 00:13:38.562 subtype: current discovery subsystem 00:13:38.562 treq: not required 00:13:38.562 portid: 0 00:13:38.562 trsvcid: 4420 00:13:38.562 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:38.562 traddr: 10.0.0.2 00:13:38.562 eflags: explicit discovery connections, duplicate discovery information 00:13:38.562 sectype: none 00:13:38.562 =====Discovery Log Entry 1====== 00:13:38.562 trtype: tcp 00:13:38.562 adrfam: ipv4 00:13:38.562 subtype: nvme subsystem 00:13:38.562 treq: not required 00:13:38.562 portid: 0 00:13:38.562 trsvcid: 4420 00:13:38.562 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:38.562 traddr: 10.0.0.2 00:13:38.562 eflags: none 00:13:38.562 sectype: none 00:13:38.562 =====Discovery Log Entry 2====== 00:13:38.562 
trtype: tcp 00:13:38.562 adrfam: ipv4 00:13:38.562 subtype: nvme subsystem 00:13:38.562 treq: not required 00:13:38.562 portid: 0 00:13:38.562 trsvcid: 4420 00:13:38.562 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:38.562 traddr: 10.0.0.2 00:13:38.562 eflags: none 00:13:38.562 sectype: none 00:13:38.562 =====Discovery Log Entry 3====== 00:13:38.562 trtype: tcp 00:13:38.562 adrfam: ipv4 00:13:38.562 subtype: nvme subsystem 00:13:38.562 treq: not required 00:13:38.562 portid: 0 00:13:38.562 trsvcid: 4420 00:13:38.562 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:38.562 traddr: 10.0.0.2 00:13:38.562 eflags: none 00:13:38.562 sectype: none 00:13:38.562 =====Discovery Log Entry 4====== 00:13:38.562 trtype: tcp 00:13:38.562 adrfam: ipv4 00:13:38.562 subtype: nvme subsystem 00:13:38.562 treq: not required 00:13:38.562 portid: 0 00:13:38.562 trsvcid: 4420 00:13:38.562 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:38.562 traddr: 10.0.0.2 00:13:38.562 eflags: none 00:13:38.562 sectype: none 00:13:38.562 =====Discovery Log Entry 5====== 00:13:38.562 trtype: tcp 00:13:38.562 adrfam: ipv4 00:13:38.562 subtype: discovery subsystem referral 00:13:38.562 treq: not required 00:13:38.562 portid: 0 00:13:38.562 trsvcid: 4430 00:13:38.562 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:38.562 traddr: 10.0.0.2 00:13:38.562 eflags: none 00:13:38.562 sectype: none 00:13:38.562 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:38.563 Perform nvmf subsystem discovery via RPC 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.563 [ 00:13:38.563 { 00:13:38.563 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:38.563 "subtype": "Discovery", 00:13:38.563 "listen_addresses": [ 00:13:38.563 { 00:13:38.563 "trtype": "TCP", 00:13:38.563 "adrfam": "IPv4", 00:13:38.563 "traddr": "10.0.0.2", 00:13:38.563 "trsvcid": "4420" 00:13:38.563 } 00:13:38.563 ], 00:13:38.563 "allow_any_host": true, 00:13:38.563 "hosts": [] 00:13:38.563 }, 00:13:38.563 { 00:13:38.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.563 "subtype": "NVMe", 00:13:38.563 "listen_addresses": [ 00:13:38.563 { 00:13:38.563 "trtype": "TCP", 00:13:38.563 "adrfam": "IPv4", 00:13:38.563 "traddr": "10.0.0.2", 00:13:38.563 "trsvcid": "4420" 00:13:38.563 } 00:13:38.563 ], 00:13:38.563 "allow_any_host": true, 00:13:38.563 "hosts": [], 00:13:38.563 "serial_number": "SPDK00000000000001", 00:13:38.563 "model_number": "SPDK bdev Controller", 00:13:38.563 "max_namespaces": 32, 00:13:38.563 "min_cntlid": 1, 00:13:38.563 "max_cntlid": 65519, 00:13:38.563 "namespaces": [ 00:13:38.563 { 00:13:38.563 "nsid": 1, 00:13:38.563 "bdev_name": "Null1", 00:13:38.563 "name": "Null1", 00:13:38.563 "nguid": "177943A1F1784AE6B79AD1595EDF653F", 00:13:38.563 "uuid": "177943a1-f178-4ae6-b79a-d1595edf653f" 00:13:38.563 } 00:13:38.563 ] 00:13:38.563 }, 00:13:38.563 { 00:13:38.563 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:38.563 "subtype": "NVMe", 00:13:38.563 "listen_addresses": [ 00:13:38.563 { 00:13:38.563 "trtype": "TCP", 00:13:38.563 "adrfam": "IPv4", 00:13:38.563 "traddr": "10.0.0.2", 00:13:38.563 "trsvcid": "4420" 00:13:38.563 } 00:13:38.563 ], 00:13:38.563 "allow_any_host": true, 00:13:38.563 "hosts": [], 00:13:38.563 "serial_number": "SPDK00000000000002", 00:13:38.563 "model_number": "SPDK bdev Controller", 00:13:38.563 "max_namespaces": 32, 00:13:38.563 "min_cntlid": 1, 00:13:38.563 "max_cntlid": 65519, 00:13:38.563 "namespaces": [ 00:13:38.563 { 00:13:38.563 "nsid": 1, 00:13:38.563 "bdev_name": "Null2", 00:13:38.563 "name": "Null2", 00:13:38.563 "nguid": "08CBD1A195094BB2B74A8055F2DCDD00", 
00:13:38.563 "uuid": "08cbd1a1-9509-4bb2-b74a-8055f2dcdd00" 00:13:38.563 } 00:13:38.563 ] 00:13:38.563 }, 00:13:38.563 { 00:13:38.563 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:38.563 "subtype": "NVMe", 00:13:38.563 "listen_addresses": [ 00:13:38.563 { 00:13:38.563 "trtype": "TCP", 00:13:38.563 "adrfam": "IPv4", 00:13:38.563 "traddr": "10.0.0.2", 00:13:38.563 "trsvcid": "4420" 00:13:38.563 } 00:13:38.563 ], 00:13:38.563 "allow_any_host": true, 00:13:38.563 "hosts": [], 00:13:38.563 "serial_number": "SPDK00000000000003", 00:13:38.563 "model_number": "SPDK bdev Controller", 00:13:38.563 "max_namespaces": 32, 00:13:38.563 "min_cntlid": 1, 00:13:38.563 "max_cntlid": 65519, 00:13:38.563 "namespaces": [ 00:13:38.563 { 00:13:38.563 "nsid": 1, 00:13:38.563 "bdev_name": "Null3", 00:13:38.563 "name": "Null3", 00:13:38.563 "nguid": "D399041B9A2E4BB1AC066EED5CDB2F20", 00:13:38.563 "uuid": "d399041b-9a2e-4bb1-ac06-6eed5cdb2f20" 00:13:38.563 } 00:13:38.563 ] 00:13:38.563 }, 00:13:38.563 { 00:13:38.563 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:38.563 "subtype": "NVMe", 00:13:38.563 "listen_addresses": [ 00:13:38.563 { 00:13:38.563 "trtype": "TCP", 00:13:38.563 "adrfam": "IPv4", 00:13:38.563 "traddr": "10.0.0.2", 00:13:38.563 "trsvcid": "4420" 00:13:38.563 } 00:13:38.563 ], 00:13:38.563 "allow_any_host": true, 00:13:38.563 "hosts": [], 00:13:38.563 "serial_number": "SPDK00000000000004", 00:13:38.563 "model_number": "SPDK bdev Controller", 00:13:38.563 "max_namespaces": 32, 00:13:38.563 "min_cntlid": 1, 00:13:38.563 "max_cntlid": 65519, 00:13:38.563 "namespaces": [ 00:13:38.563 { 00:13:38.563 "nsid": 1, 00:13:38.563 "bdev_name": "Null4", 00:13:38.563 "name": "Null4", 00:13:38.563 "nguid": "3665BA5CFAA649D897695C271DFA1556", 00:13:38.563 "uuid": "3665ba5c-faa6-49d8-9769-5c271dfa1556" 00:13:38.563 } 00:13:38.563 ] 00:13:38.563 } 00:13:38.563 ] 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.563 
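The nvmf_get_subsystems dump above is plain JSON on stdout, so quick sanity checks need nothing heavier than grep. Replaying a fragment of this run's output (against a live target the same count would come from `scripts/rpc.py nvmf_get_subsystems | grep -c cnode`, an illustrative pipeline rather than a line from this log):

```shell
# Five NQNs appear in the dump: the discovery subsystem plus four cnodes.
rpc_out=$(cat <<'EOF'
"nqn": "nqn.2014-08.org.nvmexpress.discovery",
"nqn": "nqn.2016-06.io.spdk:cnode1",
"nqn": "nqn.2016-06.io.spdk:cnode2",
"nqn": "nqn.2016-06.io.spdk:cnode3",
"nqn": "nqn.2016-06.io.spdk:cnode4",
EOF
)
printf '%s\n' "$rpc_out" | grep -c cnode   # prints: 4
```

The four NVMe entries line up with the six-record discovery log earlier (four nvme subsystems, the current discovery subsystem, and the 4430 referral added by nvmf_discovery_add_referral).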
09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:38.563 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.564 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.564 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.564 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:38.564 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.564 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:38.826 rmmod nvme_tcp 00:13:38.826 rmmod nvme_fabrics 00:13:38.826 rmmod nvme_keyring 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 227176 ']' 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 227176 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 227176 ']' 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 227176 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:38.826 
09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227176 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227176' 00:13:38.826 killing process with pid 227176 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 227176 00:13:38.826 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 227176 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.088 09:31:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.005 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:41.005 00:13:41.005 real 0m11.580s 00:13:41.005 user 0m8.930s 00:13:41.005 sys 0m5.993s 00:13:41.005 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.005 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:41.005 ************************************ 00:13:41.005 END TEST nvmf_target_discovery 00:13:41.005 ************************************ 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:41.266 ************************************ 00:13:41.266 START TEST nvmf_referrals 00:13:41.266 ************************************ 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:41.266 * Looking for test storage... 
00:13:41.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:41.266 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:41.266 09:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:41.266 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:41.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.528 
--rc genhtml_branch_coverage=1 00:13:41.528 --rc genhtml_function_coverage=1 00:13:41.528 --rc genhtml_legend=1 00:13:41.528 --rc geninfo_all_blocks=1 00:13:41.528 --rc geninfo_unexecuted_blocks=1 00:13:41.528 00:13:41.528 ' 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:41.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.528 --rc genhtml_branch_coverage=1 00:13:41.528 --rc genhtml_function_coverage=1 00:13:41.528 --rc genhtml_legend=1 00:13:41.528 --rc geninfo_all_blocks=1 00:13:41.528 --rc geninfo_unexecuted_blocks=1 00:13:41.528 00:13:41.528 ' 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:41.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.528 --rc genhtml_branch_coverage=1 00:13:41.528 --rc genhtml_function_coverage=1 00:13:41.528 --rc genhtml_legend=1 00:13:41.528 --rc geninfo_all_blocks=1 00:13:41.528 --rc geninfo_unexecuted_blocks=1 00:13:41.528 00:13:41.528 ' 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:41.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.528 --rc genhtml_branch_coverage=1 00:13:41.528 --rc genhtml_function_coverage=1 00:13:41.528 --rc genhtml_legend=1 00:13:41.528 --rc geninfo_all_blocks=1 00:13:41.528 --rc geninfo_unexecuted_blocks=1 00:13:41.528 00:13:41.528 ' 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.528 
09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.528 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.529 09:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:41.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:41.529 09:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:41.529 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:49.679 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:49.679 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.679 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:49.680 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:49.680 09:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:49.680 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:49.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:13:49.680 00:13:49.680 --- 10.0.0.2 ping statistics --- 00:13:49.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.680 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:49.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:13:49.680 00:13:49.680 --- 10.0.0.1 ping statistics --- 00:13:49.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.680 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=231814 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 231814 00:13:49.680 
09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 231814 ']' 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.680 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.680 [2024-11-19 09:31:35.589833] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:49.680 [2024-11-19 09:31:35.589897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.680 [2024-11-19 09:31:35.685492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.680 [2024-11-19 09:31:35.738190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.680 [2024-11-19 09:31:35.738242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:49.680 [2024-11-19 09:31:35.738251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.680 [2024-11-19 09:31:35.738259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.680 [2024-11-19 09:31:35.738270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.680 [2024-11-19 09:31:35.740233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.680 [2024-11-19 09:31:35.740401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.680 [2024-11-19 09:31:35.740536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.680 [2024-11-19 09:31:35.740537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.680 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.680 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:49.680 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.680 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.680 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.943 [2024-11-19 09:31:36.472264] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.943 [2024-11-19 09:31:36.488575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:49.943 09:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.943 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.205 09:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:50.205 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:50.467 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:50.728 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:50.990 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:50.990 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:50.990 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:50.990 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:50.990 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:50.990 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:51.251 09:31:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:51.513 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:51.513 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:51.513 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:51.513 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:51.513 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:51.513 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:51.513 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:51.775 09:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:13:51.775 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:52.036 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.036 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:13:52.036 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:13:52.036 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:52.036 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:52.036 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:52.036 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:52.036 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:52.297 rmmod nvme_tcp
00:13:52.297 rmmod nvme_fabrics
00:13:52.297 rmmod nvme_keyring
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 231814 ']'
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 231814
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 231814 ']'
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 231814
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 231814
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 231814'
00:13:52.297 killing process with pid 231814
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 231814
00:13:52.297 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 231814
00:13:52.297 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:52.297 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:52.297 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:52.297 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr
00:13:52.297 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save
00:13:52.297 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:52.297 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore
00:13:52.558 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:52.558 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:52.558 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:52.558 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:52.558 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:54.478 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:54.478
00:13:54.478 real 0m13.309s
00:13:54.478 user 0m16.302s
00:13:54.478 sys 0m6.514s
00:13:54.478 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:54.478 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:54.478 ************************************
00:13:54.478 END TEST nvmf_referrals
00:13:54.478 ************************************
00:13:54.478 09:31:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:13:54.478 09:31:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:54.478 09:31:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:54.478 09:31:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:54.478 ************************************
00:13:54.478 START TEST nvmf_connect_disconnect
00:13:54.478 ************************************
00:13:54.478 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:13:54.740 * Looking for test storage...
00:13:54.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:54.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.740 --rc genhtml_branch_coverage=1
00:13:54.740 --rc genhtml_function_coverage=1
00:13:54.740 --rc genhtml_legend=1
00:13:54.740 --rc geninfo_all_blocks=1
00:13:54.740 --rc geninfo_unexecuted_blocks=1
00:13:54.740
00:13:54.740 '
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:54.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.740 --rc genhtml_branch_coverage=1
00:13:54.740 --rc genhtml_function_coverage=1
00:13:54.740 --rc genhtml_legend=1
00:13:54.740 --rc geninfo_all_blocks=1
00:13:54.740 --rc geninfo_unexecuted_blocks=1
00:13:54.740
00:13:54.740 '
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:13:54.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.740 --rc genhtml_branch_coverage=1
00:13:54.740 --rc genhtml_function_coverage=1
00:13:54.740 --rc genhtml_legend=1
00:13:54.740 --rc geninfo_all_blocks=1
00:13:54.740 --rc geninfo_unexecuted_blocks=1
00:13:54.740
00:13:54.740 '
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:13:54.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.740 --rc genhtml_branch_coverage=1
00:13:54.740 --rc genhtml_function_coverage=1
00:13:54.740 --rc genhtml_legend=1
00:13:54.740 --rc geninfo_all_blocks=1
00:13:54.740 --rc geninfo_unexecuted_blocks=1
00:13:54.740
00:13:54.740 '
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:54.740 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:54.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:13:54.741 09:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:14:02.888 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=()
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=()
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:14:02.889 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:14:02.889 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:14:02.889 Found net devices under 0000:4b:00.0: cvl_0_0
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:14:02.889 Found net devices under 0000:4b:00.1: cvl_0_1
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:02.889 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:02.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:02.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms
00:14:02.890
00:14:02.890 --- 10.0.0.2 ping statistics ---
00:14:02.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:02.890 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:02.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:02.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms
00:14:02.890
00:14:02.890 --- 10.0.0.1 ping statistics ---
00:14:02.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:02.890 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- #
nvmfpid=236641 00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 236641 00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 236641 ']' 00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.890 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.890 [2024-11-19 09:31:48.958785] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:02.890 [2024-11-19 09:31:48.958855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.890 [2024-11-19 09:31:49.057846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.890 [2024-11-19 09:31:49.112136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:02.890 [2024-11-19 09:31:49.112193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.890 [2024-11-19 09:31:49.112202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.890 [2024-11-19 09:31:49.112210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.890 [2024-11-19 09:31:49.112216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.890 [2024-11-19 09:31:49.114187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.890 [2024-11-19 09:31:49.114294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.890 [2024-11-19 09:31:49.114589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.890 [2024-11-19 09:31:49.114592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:03.153 09:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:03.153 [2024-11-19 09:31:49.834895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.153 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.415 09:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:03.415 [2024-11-19 09:31:49.917811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:03.415 09:31:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:07.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:21.738 09:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:21.738 rmmod nvme_tcp 00:14:21.738 rmmod nvme_fabrics 00:14:21.738 rmmod nvme_keyring 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 236641 ']' 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 236641 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 236641 ']' 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 236641 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 236641 00:14:21.738 
09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 236641' 00:14:21.738 killing process with pid 236641 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 236641 00:14:21.738 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 236641 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.739 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:24.299 00:14:24.299 real 0m29.232s 00:14:24.299 user 1m18.802s 00:14:24.299 sys 0m7.101s 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:24.299 ************************************ 00:14:24.299 END TEST nvmf_connect_disconnect 00:14:24.299 ************************************ 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.299 ************************************ 00:14:24.299 START TEST nvmf_multitarget 00:14:24.299 ************************************ 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:24.299 * Looking for test storage... 
00:14:24.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:24.299 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.299 --rc genhtml_branch_coverage=1 00:14:24.299 --rc genhtml_function_coverage=1 00:14:24.299 --rc genhtml_legend=1 00:14:24.299 --rc geninfo_all_blocks=1 00:14:24.299 --rc geninfo_unexecuted_blocks=1 00:14:24.299 00:14:24.299 ' 00:14:24.299 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:24.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.299 --rc genhtml_branch_coverage=1 00:14:24.299 --rc genhtml_function_coverage=1 00:14:24.299 --rc genhtml_legend=1 00:14:24.299 --rc geninfo_all_blocks=1 00:14:24.299 --rc geninfo_unexecuted_blocks=1 00:14:24.299 00:14:24.299 ' 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:24.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.300 --rc genhtml_branch_coverage=1 00:14:24.300 --rc genhtml_function_coverage=1 00:14:24.300 --rc genhtml_legend=1 00:14:24.300 --rc geninfo_all_blocks=1 00:14:24.300 --rc geninfo_unexecuted_blocks=1 00:14:24.300 00:14:24.300 ' 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:24.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.300 --rc genhtml_branch_coverage=1 00:14:24.300 --rc genhtml_function_coverage=1 00:14:24.300 --rc genhtml_legend=1 00:14:24.300 --rc geninfo_all_blocks=1 00:14:24.300 --rc geninfo_unexecuted_blocks=1 00:14:24.300 00:14:24.300 ' 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.300 09:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.300 09:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:24.300 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:32.448 09:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.448 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:32.449 09:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:32.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:32.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.449 09:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:32.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.449 
09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:32.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.449 09:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:32.449 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:32.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:14:32.449 00:14:32.449 --- 10.0.0.2 ping statistics --- 00:14:32.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.449 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:14:32.449 00:14:32.449 --- 10.0.0.1 ping statistics --- 00:14:32.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.449 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=244766 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 244766 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 244766 ']' 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.449 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:32.449 [2024-11-19 09:32:18.231964] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:32.449 [2024-11-19 09:32:18.232031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.449 [2024-11-19 09:32:18.332637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.450 [2024-11-19 09:32:18.385131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.450 [2024-11-19 09:32:18.385197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:32.450 [2024-11-19 09:32:18.385207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.450 [2024-11-19 09:32:18.385216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.450 [2024-11-19 09:32:18.385223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.450 [2024-11-19 09:32:18.387323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.450 [2024-11-19 09:32:18.387484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.450 [2024-11-19 09:32:18.387643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.450 [2024-11-19 09:32:18.387644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.450 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.450 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:32.450 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:32.450 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:32.450 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:32.450 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.450 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:32.450 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:32.450 09:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:32.712 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:32.712 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:32.712 "nvmf_tgt_1" 00:14:32.712 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:32.712 "nvmf_tgt_2" 00:14:32.975 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:32.975 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:32.975 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:32.975 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:32.975 true 00:14:32.975 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:33.236 true 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:33.236 rmmod nvme_tcp 00:14:33.236 rmmod nvme_fabrics 00:14:33.236 rmmod nvme_keyring 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 244766 ']' 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 244766 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 244766 ']' 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 244766 00:14:33.236 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:33.497 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.497 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 244766 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 244766' 00:14:33.497 killing process with pid 244766 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 244766 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 244766 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.497 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:36.045 00:14:36.045 real 0m11.773s 00:14:36.045 user 0m10.266s 00:14:36.045 sys 0m6.125s 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:36.045 ************************************ 00:14:36.045 END TEST nvmf_multitarget 00:14:36.045 ************************************ 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.045 ************************************ 00:14:36.045 START TEST nvmf_rpc 00:14:36.045 ************************************ 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:36.045 * Looking for test storage... 
00:14:36.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.045 09:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.045 --rc genhtml_branch_coverage=1 00:14:36.045 --rc genhtml_function_coverage=1 00:14:36.045 --rc genhtml_legend=1 00:14:36.045 --rc geninfo_all_blocks=1 00:14:36.045 --rc geninfo_unexecuted_blocks=1 
00:14:36.045 00:14:36.045 ' 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.045 --rc genhtml_branch_coverage=1 00:14:36.045 --rc genhtml_function_coverage=1 00:14:36.045 --rc genhtml_legend=1 00:14:36.045 --rc geninfo_all_blocks=1 00:14:36.045 --rc geninfo_unexecuted_blocks=1 00:14:36.045 00:14:36.045 ' 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.045 --rc genhtml_branch_coverage=1 00:14:36.045 --rc genhtml_function_coverage=1 00:14:36.045 --rc genhtml_legend=1 00:14:36.045 --rc geninfo_all_blocks=1 00:14:36.045 --rc geninfo_unexecuted_blocks=1 00:14:36.045 00:14:36.045 ' 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:36.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.045 --rc genhtml_branch_coverage=1 00:14:36.045 --rc genhtml_function_coverage=1 00:14:36.045 --rc genhtml_legend=1 00:14:36.045 --rc geninfo_all_blocks=1 00:14:36.045 --rc geninfo_unexecuted_blocks=1 00:14:36.045 00:14:36.045 ' 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.045 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.046 09:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:36.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:36.046 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:36.046 09:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.195 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.195 
09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:14:44.196 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:44.196 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:44.196 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:44.196 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.196 09:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:44.196 
09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:44.196 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:44.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:14:44.196 00:14:44.196 --- 10.0.0.2 ping statistics --- 00:14:44.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.196 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:14:44.196 00:14:44.196 --- 10.0.0.1 ping statistics --- 00:14:44.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.196 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=249303 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 249303 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 249303 ']' 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.196 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.197 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.197 [2024-11-19 09:32:30.165567] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:44.197 [2024-11-19 09:32:30.165642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.197 [2024-11-19 09:32:30.262902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.197 [2024-11-19 09:32:30.317995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.197 [2024-11-19 09:32:30.318046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:44.197 [2024-11-19 09:32:30.318056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.197 [2024-11-19 09:32:30.318064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.197 [2024-11-19 09:32:30.318070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.197 [2024-11-19 09:32:30.320323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.197 [2024-11-19 09:32:30.320628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.197 [2024-11-19 09:32:30.320791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.197 [2024-11-19 09:32:30.320792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.459 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.459 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:44.459 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.459 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.459 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.459 09:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:44.459 "tick_rate": 2400000000, 00:14:44.459 "poll_groups": [ 00:14:44.459 { 00:14:44.459 "name": "nvmf_tgt_poll_group_000", 00:14:44.459 "admin_qpairs": 0, 00:14:44.459 "io_qpairs": 0, 00:14:44.459 "current_admin_qpairs": 0, 00:14:44.459 "current_io_qpairs": 0, 00:14:44.459 "pending_bdev_io": 0, 00:14:44.459 "completed_nvme_io": 0, 00:14:44.459 "transports": [] 00:14:44.459 }, 00:14:44.459 { 00:14:44.459 "name": "nvmf_tgt_poll_group_001", 00:14:44.459 "admin_qpairs": 0, 00:14:44.459 "io_qpairs": 0, 00:14:44.459 "current_admin_qpairs": 0, 00:14:44.459 "current_io_qpairs": 0, 00:14:44.459 "pending_bdev_io": 0, 00:14:44.459 "completed_nvme_io": 0, 00:14:44.459 "transports": [] 00:14:44.459 }, 00:14:44.459 { 00:14:44.459 "name": "nvmf_tgt_poll_group_002", 00:14:44.459 "admin_qpairs": 0, 00:14:44.459 "io_qpairs": 0, 00:14:44.459 "current_admin_qpairs": 0, 00:14:44.459 "current_io_qpairs": 0, 00:14:44.459 "pending_bdev_io": 0, 00:14:44.459 "completed_nvme_io": 0, 00:14:44.459 "transports": [] 00:14:44.459 }, 00:14:44.459 { 00:14:44.459 "name": "nvmf_tgt_poll_group_003", 00:14:44.459 "admin_qpairs": 0, 00:14:44.459 "io_qpairs": 0, 00:14:44.459 "current_admin_qpairs": 0, 00:14:44.459 "current_io_qpairs": 0, 00:14:44.459 "pending_bdev_io": 0, 00:14:44.459 "completed_nvme_io": 0, 00:14:44.459 "transports": [] 00:14:44.459 } 00:14:44.459 ] 00:14:44.459 }' 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:44.459 09:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.459 [2024-11-19 09:32:31.168182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:44.459 "tick_rate": 2400000000, 00:14:44.459 "poll_groups": [ 00:14:44.459 { 00:14:44.459 "name": "nvmf_tgt_poll_group_000", 00:14:44.459 "admin_qpairs": 0, 00:14:44.459 "io_qpairs": 0, 00:14:44.459 "current_admin_qpairs": 0, 00:14:44.459 "current_io_qpairs": 0, 00:14:44.459 "pending_bdev_io": 0, 00:14:44.459 "completed_nvme_io": 0, 00:14:44.459 "transports": [ 00:14:44.459 { 00:14:44.459 "trtype": "TCP" 00:14:44.459 } 00:14:44.459 ] 00:14:44.459 }, 00:14:44.459 { 00:14:44.459 "name": "nvmf_tgt_poll_group_001", 00:14:44.459 "admin_qpairs": 0, 00:14:44.459 "io_qpairs": 0, 00:14:44.459 "current_admin_qpairs": 0, 00:14:44.459 "current_io_qpairs": 0, 00:14:44.459 "pending_bdev_io": 0, 00:14:44.459 
"completed_nvme_io": 0, 00:14:44.459 "transports": [ 00:14:44.459 { 00:14:44.459 "trtype": "TCP" 00:14:44.459 } 00:14:44.459 ] 00:14:44.459 }, 00:14:44.459 { 00:14:44.459 "name": "nvmf_tgt_poll_group_002", 00:14:44.459 "admin_qpairs": 0, 00:14:44.459 "io_qpairs": 0, 00:14:44.459 "current_admin_qpairs": 0, 00:14:44.459 "current_io_qpairs": 0, 00:14:44.459 "pending_bdev_io": 0, 00:14:44.459 "completed_nvme_io": 0, 00:14:44.459 "transports": [ 00:14:44.459 { 00:14:44.459 "trtype": "TCP" 00:14:44.459 } 00:14:44.459 ] 00:14:44.459 }, 00:14:44.459 { 00:14:44.459 "name": "nvmf_tgt_poll_group_003", 00:14:44.459 "admin_qpairs": 0, 00:14:44.459 "io_qpairs": 0, 00:14:44.459 "current_admin_qpairs": 0, 00:14:44.459 "current_io_qpairs": 0, 00:14:44.459 "pending_bdev_io": 0, 00:14:44.459 "completed_nvme_io": 0, 00:14:44.459 "transports": [ 00:14:44.459 { 00:14:44.459 "trtype": "TCP" 00:14:44.459 } 00:14:44.459 ] 00:14:44.459 } 00:14:44.459 ] 00:14:44.459 }' 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:44.459 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:44.721 
09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.721 Malloc1 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:44.721 09:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.721 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.722 [2024-11-19 09:32:31.379798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:44.722 [2024-11-19 09:32:31.416901] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:44.722 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:44.722 could not add new controller: failed to write to nvme-fabrics device 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.722 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.637 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.637 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:46.637 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.637 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:46.637 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:48.550 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:48.550 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:48.550 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.550 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:48.550 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.550 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:14:48.550 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:48.550 09:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.550 [2024-11-19 09:32:35.132639] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:48.550 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:48.550 could not add new controller: failed to write to nvme-fabrics device 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:48.550 
09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.550 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:50.464 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.464 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:50.464 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.464 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:50.464 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:52.378 09:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 [2024-11-19 09:32:38.935970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.378 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:53.763 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:53.763 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:53.763 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.763 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:53.763 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:56.310 
09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.310 [2024-11-19 09:32:42.651009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.310 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:57.696 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:57.696 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:57.696 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.696 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:57.696 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.610 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 09:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 [2024-11-19 09:32:46.366156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.871 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.256 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.256 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:01.256 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.256 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:01.256 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:03.177 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:03.177 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:03.177 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.177 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:03.177 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.177 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:03.177 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.438 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.699 [2024-11-19 09:32:50.188149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.699 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.084 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:05.084 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:05.084 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:15:05.084 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:05.084 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:06.999 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:06.999 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:06.999 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.999 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:06.999 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.999 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:06.999 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.260 [2024-11-19 09:32:53.899576] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.260 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:09.176 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:09.176 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:09.176 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.176 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:09.176 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.094 [2024-11-19 09:32:57.661647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.094 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.095 [2024-11-19 09:32:57.733844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.095 
09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:15:11.095 [2024-11-19 09:32:57.802026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.095 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 [2024-11-19 09:32:57.870247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 [2024-11-19 09:32:57.934512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:11.358 "tick_rate": 2400000000, 00:15:11.358 "poll_groups": [ 00:15:11.358 { 00:15:11.358 "name": "nvmf_tgt_poll_group_000", 00:15:11.358 "admin_qpairs": 0, 00:15:11.358 "io_qpairs": 224, 00:15:11.358 "current_admin_qpairs": 0, 00:15:11.358 "current_io_qpairs": 0, 00:15:11.358 "pending_bdev_io": 0, 00:15:11.358 "completed_nvme_io": 227, 00:15:11.358 "transports": [ 00:15:11.358 { 00:15:11.358 "trtype": "TCP" 00:15:11.358 } 00:15:11.358 ] 00:15:11.358 }, 00:15:11.358 { 00:15:11.358 "name": "nvmf_tgt_poll_group_001", 00:15:11.358 "admin_qpairs": 1, 00:15:11.358 "io_qpairs": 223, 00:15:11.358 "current_admin_qpairs": 0, 00:15:11.358 "current_io_qpairs": 0, 00:15:11.358 "pending_bdev_io": 0, 00:15:11.358 "completed_nvme_io": 274, 00:15:11.358 "transports": [ 00:15:11.358 { 00:15:11.358 "trtype": "TCP" 00:15:11.358 } 00:15:11.358 ] 00:15:11.358 }, 00:15:11.358 { 00:15:11.358 "name": "nvmf_tgt_poll_group_002", 00:15:11.358 "admin_qpairs": 6, 00:15:11.358 "io_qpairs": 218, 00:15:11.358 "current_admin_qpairs": 0, 00:15:11.358 "current_io_qpairs": 0, 00:15:11.358 "pending_bdev_io": 0, 
00:15:11.358 "completed_nvme_io": 258, 00:15:11.358 "transports": [ 00:15:11.358 { 00:15:11.358 "trtype": "TCP" 00:15:11.358 } 00:15:11.358 ] 00:15:11.358 }, 00:15:11.358 { 00:15:11.358 "name": "nvmf_tgt_poll_group_003", 00:15:11.358 "admin_qpairs": 0, 00:15:11.358 "io_qpairs": 224, 00:15:11.358 "current_admin_qpairs": 0, 00:15:11.358 "current_io_qpairs": 0, 00:15:11.358 "pending_bdev_io": 0, 00:15:11.358 "completed_nvme_io": 480, 00:15:11.358 "transports": [ 00:15:11.358 { 00:15:11.358 "trtype": "TCP" 00:15:11.358 } 00:15:11.358 ] 00:15:11.358 } 00:15:11.358 ] 00:15:11.358 }' 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:11.358 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
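The `jsum` helper traced above pipes a jq filter over the `nvmf_get_stats` JSON and sums the extracted numbers with awk. A minimal standalone sketch, using a trimmed stand-in for the stats payload shown in this log (the qpair counts are copied from it, so the sums match the `(( 7 > 0 ))` and `(( 889 > 0 ))` checks):

```shell
#!/usr/bin/env bash
# Re-creation of the jsum pattern from target/rpc.sh: jq extracts one
# numeric field per poll group, awk accumulates and prints the total.
# `stats` is a trimmed stand-in for the nvmf_get_stats output above.
stats='{"poll_groups":[
  {"admin_qpairs":0,"io_qpairs":224},
  {"admin_qpairs":1,"io_qpairs":223},
  {"admin_qpairs":6,"io_qpairs":218},
  {"admin_qpairs":0,"io_qpairs":224}]}'

jsum() {
    local filter=$1
    echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
}

jsum '.poll_groups[].admin_qpairs'   # 0+1+6+0 = 7
jsum '.poll_groups[].io_qpairs'      # 224+223+218+224 = 889
```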
target/rpc.sh@123 -- # nvmftestfini 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:11.621 rmmod nvme_tcp 00:15:11.621 rmmod nvme_fabrics 00:15:11.621 rmmod nvme_keyring 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 249303 ']' 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 249303 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 249303 ']' 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 249303 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 249303 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 249303' 00:15:11.621 killing process with pid 249303 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 249303 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 249303 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:11.621 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:11.883 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:15:11.883 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:11.883 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:15:11.883 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:11.883 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:11.883 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.883 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.883 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.802 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:13.802 00:15:13.802 real 0m38.066s 00:15:13.802 user 1m54.152s 00:15:13.802 sys 0m7.865s 00:15:13.802 09:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.802 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.802 ************************************ 00:15:13.802 END TEST nvmf_rpc 00:15:13.802 ************************************ 00:15:13.802 09:33:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:13.802 09:33:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:13.802 09:33:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.802 09:33:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:13.802 ************************************ 00:15:13.802 START TEST nvmf_invalid 00:15:13.802 ************************************ 00:15:13.802 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:14.064 * Looking for test storage... 
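The nvmf_rpc test that finished above repeatedly relied on the `waitforserial` helper from common/autotest_common.sh: poll `lsblk -l -o NAME,SERIAL` until the expected device count for a serial appears, for at most 16 attempts with a sleep between polls. A self-contained sketch of that pattern; `list_devices` is a hypothetical stand-in for `lsblk` so the example runs without real hardware:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial polling loop seen in the trace above.
# list_devices stands in for `lsblk -l -o NAME,SERIAL`; here it
# pretends the namespace shows up on the third poll.
attempt=0
list_devices() {
    if (( attempt >= 3 )); then
        echo "nvme0n1 SPDKISFASTANDAWESOME"
    fi
}

waitforserial() {
    local serial=$1 want=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do
        found=$(list_devices | grep -c "$serial")
        (( found == want )) && return 0
        attempt=$((attempt + 1))   # the real helper does `sleep 2` here
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME && echo "serial present"
```

The real helper counts matches with `grep -c` exactly as above, which is why the trace shows `nvme_devices=1` once the connected namespace's block device appears.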
00:15:14.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:14.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.064 --rc genhtml_branch_coverage=1 00:15:14.064 --rc 
genhtml_function_coverage=1 00:15:14.064 --rc genhtml_legend=1 00:15:14.064 --rc geninfo_all_blocks=1 00:15:14.064 --rc geninfo_unexecuted_blocks=1 00:15:14.064 00:15:14.064 ' 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:14.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.064 --rc genhtml_branch_coverage=1 00:15:14.064 --rc genhtml_function_coverage=1 00:15:14.064 --rc genhtml_legend=1 00:15:14.064 --rc geninfo_all_blocks=1 00:15:14.064 --rc geninfo_unexecuted_blocks=1 00:15:14.064 00:15:14.064 ' 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:14.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.064 --rc genhtml_branch_coverage=1 00:15:14.064 --rc genhtml_function_coverage=1 00:15:14.064 --rc genhtml_legend=1 00:15:14.064 --rc geninfo_all_blocks=1 00:15:14.064 --rc geninfo_unexecuted_blocks=1 00:15:14.064 00:15:14.064 ' 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:14.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.064 --rc genhtml_branch_coverage=1 00:15:14.064 --rc genhtml_function_coverage=1 00:15:14.064 --rc genhtml_legend=1 00:15:14.064 --rc geninfo_all_blocks=1 00:15:14.064 --rc geninfo_unexecuted_blocks=1 00:15:14.064 00:15:14.064 ' 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.064 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.065 09:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:14.065 09:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:14.065 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:22.210 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.210 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:22.210 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:22.211 09:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.211 09:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:22.211 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:22.211 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:22.211 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:22.211 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.211 09:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.211 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.211 09:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:22.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:15:22.211 00:15:22.211 --- 10.0.0.2 ping statistics --- 00:15:22.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.211 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:15:22.211 00:15:22.211 --- 10.0.0.1 ping statistics --- 00:15:22.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.211 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.211 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:22.212 09:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=259149 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 259149 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 259149 ']' 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.212 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:22.212 [2024-11-19 09:33:08.345124] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:15:22.212 [2024-11-19 09:33:08.345202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.212 [2024-11-19 09:33:08.427073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.212 [2024-11-19 09:33:08.482447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.212 [2024-11-19 09:33:08.482496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.212 [2024-11-19 09:33:08.482505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.212 [2024-11-19 09:33:08.482512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.212 [2024-11-19 09:33:08.482519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:22.212 [2024-11-19 09:33:08.484821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.212 [2024-11-19 09:33:08.484986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.212 [2024-11-19 09:33:08.485148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.212 [2024-11-19 09:33:08.485148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.473 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.473 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:15:22.473 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:22.473 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:22.473 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:22.735 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.735 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:22.735 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16352 00:15:22.735 [2024-11-19 09:33:09.392511] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:22.735 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:22.735 { 00:15:22.735 "nqn": "nqn.2016-06.io.spdk:cnode16352", 00:15:22.735 "tgt_name": "foobar", 00:15:22.735 "method": "nvmf_create_subsystem", 00:15:22.735 "req_id": 1 00:15:22.735 } 00:15:22.735 Got JSON-RPC error 
response 00:15:22.735 response: 00:15:22.735 { 00:15:22.735 "code": -32603, 00:15:22.735 "message": "Unable to find target foobar" 00:15:22.735 }' 00:15:22.735 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:22.735 { 00:15:22.735 "nqn": "nqn.2016-06.io.spdk:cnode16352", 00:15:22.735 "tgt_name": "foobar", 00:15:22.735 "method": "nvmf_create_subsystem", 00:15:22.735 "req_id": 1 00:15:22.735 } 00:15:22.735 Got JSON-RPC error response 00:15:22.735 response: 00:15:22.735 { 00:15:22.735 "code": -32603, 00:15:22.735 "message": "Unable to find target foobar" 00:15:22.735 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:22.735 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:22.735 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16659 00:15:22.997 [2024-11-19 09:33:09.601353] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16659: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:22.997 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:22.997 { 00:15:22.997 "nqn": "nqn.2016-06.io.spdk:cnode16659", 00:15:22.997 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:22.997 "method": "nvmf_create_subsystem", 00:15:22.997 "req_id": 1 00:15:22.997 } 00:15:22.997 Got JSON-RPC error response 00:15:22.997 response: 00:15:22.997 { 00:15:22.997 "code": -32602, 00:15:22.997 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:22.997 }' 00:15:22.997 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:22.997 { 00:15:22.997 "nqn": "nqn.2016-06.io.spdk:cnode16659", 00:15:22.997 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:22.997 "method": "nvmf_create_subsystem", 
00:15:22.997 "req_id": 1
00:15:22.997 }
00:15:22.997 Got JSON-RPC error response
00:15:22.997 response:
00:15:22.997 {
00:15:22.997 "code": -32602,
00:15:22.997 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:15:22.997 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:15:22.997 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:15:22.997 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31365
00:15:23.258 [2024-11-19 09:33:09.810099] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31365: invalid model number 'SPDK_Controller'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:15:23.258 {
00:15:23.258 "nqn": "nqn.2016-06.io.spdk:cnode31365",
00:15:23.258 "model_number": "SPDK_Controller\u001f",
00:15:23.258 "method": "nvmf_create_subsystem",
00:15:23.258 "req_id": 1
00:15:23.258 }
00:15:23.258 Got JSON-RPC error response
00:15:23.258 response:
00:15:23.258 {
00:15:23.258 "code": -32602,
00:15:23.258 "message": "Invalid MN SPDK_Controller\u001f"
00:15:23.258 }'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:15:23.258 {
00:15:23.258 "nqn": "nqn.2016-06.io.spdk:cnode31365",
00:15:23.258 "model_number": "SPDK_Controller\u001f",
00:15:23.258 "method": "nvmf_create_subsystem",
00:15:23.258 "req_id": 1
00:15:23.258 }
00:15:23.258 Got JSON-RPC error response
00:15:23.258 response:
00:15:23.258 {
00:15:23.258 "code": -32602,
00:15:23.258 "message": "Invalid MN SPDK_Controller\u001f"
00:15:23.258 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='('
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70'
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.258 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68'
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43'
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58'
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.259 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.521 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.521 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a'
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]]
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '>J\L1I@aw(8Y`p9hCXdez'
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '>J\L1I@aw(8Y`p9hCXdez' nqn.2016-06.io.spdk:cnode26461
00:15:23.522 [2024-11-19 09:33:10.191896] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26461: invalid serial number '>J\L1I@aw(8Y`p9hCXdez'
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:15:23.522 {
00:15:23.522 "nqn": "nqn.2016-06.io.spdk:cnode26461",
00:15:23.522 "serial_number": ">J\\L1I@aw(8Y`p9hCXdez",
00:15:23.522 "method": "nvmf_create_subsystem",
00:15:23.522 "req_id": 1
00:15:23.522 }
00:15:23.522 Got JSON-RPC error response
00:15:23.522 response:
00:15:23.522 {
00:15:23.522 "code": -32602,
00:15:23.522 "message": "Invalid SN >J\\L1I@aw(8Y`p9hCXdez"
00:15:23.522 }'
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:15:23.522 {
00:15:23.522 "nqn": "nqn.2016-06.io.spdk:cnode26461",
00:15:23.522 "serial_number": ">J\\L1I@aw(8Y`p9hCXdez",
00:15:23.522 "method": "nvmf_create_subsystem",
00:15:23.522 "req_id": 1
00:15:23.522 }
00:15:23.522 Got JSON-RPC error response
00:15:23.522 response:
00:15:23.522 {
00:15:23.522 "code": -32602,
00:15:23.522 "message": "Invalid SN >J\\L1I@aw(8Y`p9hCXdez"
00:15:23.522 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b'
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.522 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58'
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59'
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42'
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58'
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48'
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36'
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43
00:15:23.786 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=,
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66'
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:23.787 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61'
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44'
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b'
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';'
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c'
00:15:24.050 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|'
00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:24.051 09:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''K`rXYBrXHg6+*}LXC`LfcZ1)oj5,05KFIhfaD;|i' 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ''\''K`rXYBrXHg6+*}LXC`LfcZ1)oj5,05KFIhfaD;|i' nqn.2016-06.io.spdk:cnode17555 00:15:24.051 [2024-11-19 09:33:10.738039] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17555: invalid model number ''K`rXYBrXHg6+*}LXC`LfcZ1)oj5,05KFIhfaD;|i' 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:24.051 { 00:15:24.051 "nqn": "nqn.2016-06.io.spdk:cnode17555", 00:15:24.051 "model_number": "'\''K`rXYBrXHg6+*}LXC`LfcZ1)oj5,05KFIhfaD;|i", 00:15:24.051 "method": "nvmf_create_subsystem", 00:15:24.051 "req_id": 1 00:15:24.051 } 00:15:24.051 Got JSON-RPC error response 00:15:24.051 response: 00:15:24.051 { 00:15:24.051 "code": -32602, 00:15:24.051 "message": "Invalid MN '\''K`rXYBrXHg6+*}LXC`LfcZ1)oj5,05KFIhfaD;|i" 00:15:24.051 }' 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:24.051 { 00:15:24.051 
"nqn": "nqn.2016-06.io.spdk:cnode17555", 00:15:24.051 "model_number": "'K`rXYBrXHg6+*}LXC`LfcZ1)oj5,05KFIhfaD;|i", 00:15:24.051 "method": "nvmf_create_subsystem", 00:15:24.051 "req_id": 1 00:15:24.051 } 00:15:24.051 Got JSON-RPC error response 00:15:24.051 response: 00:15:24.051 { 00:15:24.051 "code": -32602, 00:15:24.051 "message": "Invalid MN 'K`rXYBrXHg6+*}LXC`LfcZ1)oj5,05KFIhfaD;|i" 00:15:24.051 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:24.051 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:24.314 [2024-11-19 09:33:10.930873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.314 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:24.576 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:24.576 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:24.576 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:24.576 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:24.576 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:24.838 [2024-11-19 09:33:11.348430] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:24.838 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:24.838 { 00:15:24.838 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:24.838 "listen_address": { 00:15:24.838 "trtype": "tcp", 00:15:24.838 "traddr": "", 00:15:24.838 "trsvcid": 
"4421" 00:15:24.838 }, 00:15:24.838 "method": "nvmf_subsystem_remove_listener", 00:15:24.838 "req_id": 1 00:15:24.838 } 00:15:24.838 Got JSON-RPC error response 00:15:24.838 response: 00:15:24.838 { 00:15:24.838 "code": -32602, 00:15:24.838 "message": "Invalid parameters" 00:15:24.838 }' 00:15:24.838 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:24.838 { 00:15:24.838 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:24.838 "listen_address": { 00:15:24.838 "trtype": "tcp", 00:15:24.838 "traddr": "", 00:15:24.838 "trsvcid": "4421" 00:15:24.838 }, 00:15:24.838 "method": "nvmf_subsystem_remove_listener", 00:15:24.838 "req_id": 1 00:15:24.838 } 00:15:24.838 Got JSON-RPC error response 00:15:24.838 response: 00:15:24.838 { 00:15:24.838 "code": -32602, 00:15:24.838 "message": "Invalid parameters" 00:15:24.838 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:24.838 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8573 -i 0 00:15:24.838 [2024-11-19 09:33:11.557219] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8573: invalid cntlid range [0-65519] 00:15:25.100 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:25.100 { 00:15:25.100 "nqn": "nqn.2016-06.io.spdk:cnode8573", 00:15:25.100 "min_cntlid": 0, 00:15:25.100 "method": "nvmf_create_subsystem", 00:15:25.100 "req_id": 1 00:15:25.100 } 00:15:25.100 Got JSON-RPC error response 00:15:25.100 response: 00:15:25.100 { 00:15:25.100 "code": -32602, 00:15:25.100 "message": "Invalid cntlid range [0-65519]" 00:15:25.100 }' 00:15:25.100 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:25.100 { 00:15:25.100 "nqn": "nqn.2016-06.io.spdk:cnode8573", 00:15:25.100 "min_cntlid": 0, 00:15:25.100 "method": 
"nvmf_create_subsystem", 00:15:25.100 "req_id": 1 00:15:25.100 } 00:15:25.100 Got JSON-RPC error response 00:15:25.100 response: 00:15:25.100 { 00:15:25.100 "code": -32602, 00:15:25.100 "message": "Invalid cntlid range [0-65519]" 00:15:25.100 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:25.100 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21868 -i 65520 00:15:25.100 [2024-11-19 09:33:11.761908] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21868: invalid cntlid range [65520-65519] 00:15:25.100 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:25.100 { 00:15:25.100 "nqn": "nqn.2016-06.io.spdk:cnode21868", 00:15:25.100 "min_cntlid": 65520, 00:15:25.100 "method": "nvmf_create_subsystem", 00:15:25.100 "req_id": 1 00:15:25.100 } 00:15:25.100 Got JSON-RPC error response 00:15:25.100 response: 00:15:25.100 { 00:15:25.100 "code": -32602, 00:15:25.100 "message": "Invalid cntlid range [65520-65519]" 00:15:25.100 }' 00:15:25.100 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:25.100 { 00:15:25.100 "nqn": "nqn.2016-06.io.spdk:cnode21868", 00:15:25.100 "min_cntlid": 65520, 00:15:25.100 "method": "nvmf_create_subsystem", 00:15:25.100 "req_id": 1 00:15:25.100 } 00:15:25.100 Got JSON-RPC error response 00:15:25.100 response: 00:15:25.100 { 00:15:25.100 "code": -32602, 00:15:25.100 "message": "Invalid cntlid range [65520-65519]" 00:15:25.100 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:25.100 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16097 -I 0 00:15:25.361 [2024-11-19 09:33:11.950510] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode16097: invalid cntlid range [1-0] 00:15:25.361 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:25.361 { 00:15:25.361 "nqn": "nqn.2016-06.io.spdk:cnode16097", 00:15:25.361 "max_cntlid": 0, 00:15:25.361 "method": "nvmf_create_subsystem", 00:15:25.361 "req_id": 1 00:15:25.361 } 00:15:25.361 Got JSON-RPC error response 00:15:25.361 response: 00:15:25.361 { 00:15:25.361 "code": -32602, 00:15:25.361 "message": "Invalid cntlid range [1-0]" 00:15:25.361 }' 00:15:25.361 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:25.361 { 00:15:25.361 "nqn": "nqn.2016-06.io.spdk:cnode16097", 00:15:25.361 "max_cntlid": 0, 00:15:25.361 "method": "nvmf_create_subsystem", 00:15:25.361 "req_id": 1 00:15:25.361 } 00:15:25.361 Got JSON-RPC error response 00:15:25.361 response: 00:15:25.361 { 00:15:25.361 "code": -32602, 00:15:25.361 "message": "Invalid cntlid range [1-0]" 00:15:25.361 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:25.361 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22639 -I 65520 00:15:25.623 [2024-11-19 09:33:12.135076] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22639: invalid cntlid range [1-65520] 00:15:25.623 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:25.623 { 00:15:25.623 "nqn": "nqn.2016-06.io.spdk:cnode22639", 00:15:25.623 "max_cntlid": 65520, 00:15:25.623 "method": "nvmf_create_subsystem", 00:15:25.623 "req_id": 1 00:15:25.623 } 00:15:25.623 Got JSON-RPC error response 00:15:25.623 response: 00:15:25.623 { 00:15:25.623 "code": -32602, 00:15:25.623 "message": "Invalid cntlid range [1-65520]" 00:15:25.623 }' 00:15:25.623 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:15:25.623 { 00:15:25.623 "nqn": "nqn.2016-06.io.spdk:cnode22639", 00:15:25.623 "max_cntlid": 65520, 00:15:25.623 "method": "nvmf_create_subsystem", 00:15:25.623 "req_id": 1 00:15:25.623 } 00:15:25.623 Got JSON-RPC error response 00:15:25.623 response: 00:15:25.623 { 00:15:25.623 "code": -32602, 00:15:25.623 "message": "Invalid cntlid range [1-65520]" 00:15:25.623 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:25.623 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2547 -i 6 -I 5 00:15:25.623 [2024-11-19 09:33:12.319651] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2547: invalid cntlid range [6-5] 00:15:25.623 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:25.623 { 00:15:25.623 "nqn": "nqn.2016-06.io.spdk:cnode2547", 00:15:25.623 "min_cntlid": 6, 00:15:25.623 "max_cntlid": 5, 00:15:25.623 "method": "nvmf_create_subsystem", 00:15:25.623 "req_id": 1 00:15:25.623 } 00:15:25.623 Got JSON-RPC error response 00:15:25.623 response: 00:15:25.623 { 00:15:25.623 "code": -32602, 00:15:25.623 "message": "Invalid cntlid range [6-5]" 00:15:25.623 }' 00:15:25.623 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:25.623 { 00:15:25.623 "nqn": "nqn.2016-06.io.spdk:cnode2547", 00:15:25.623 "min_cntlid": 6, 00:15:25.623 "max_cntlid": 5, 00:15:25.623 "method": "nvmf_create_subsystem", 00:15:25.623 "req_id": 1 00:15:25.623 } 00:15:25.623 Got JSON-RPC error response 00:15:25.623 response: 00:15:25.623 { 00:15:25.623 "code": -32602, 00:15:25.623 "message": "Invalid cntlid range [6-5]" 00:15:25.623 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:25.623 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:25.884 { 00:15:25.884 "name": "foobar", 00:15:25.884 "method": "nvmf_delete_target", 00:15:25.884 "req_id": 1 00:15:25.884 } 00:15:25.884 Got JSON-RPC error response 00:15:25.884 response: 00:15:25.884 { 00:15:25.884 "code": -32602, 00:15:25.884 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:25.884 }' 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:25.884 { 00:15:25.884 "name": "foobar", 00:15:25.884 "method": "nvmf_delete_target", 00:15:25.884 "req_id": 1 00:15:25.884 } 00:15:25.884 Got JSON-RPC error response 00:15:25.884 response: 00:15:25.884 { 00:15:25.884 "code": -32602, 00:15:25.884 "message": "The specified target doesn't exist, cannot delete it." 00:15:25.884 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:25.884 rmmod nvme_tcp 00:15:25.884 
rmmod nvme_fabrics 00:15:25.884 rmmod nvme_keyring 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 259149 ']' 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 259149 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 259149 ']' 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 259149 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:15:25.884 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.885 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 259149 00:15:25.885 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.885 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.885 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 259149' 00:15:25.885 killing process with pid 259149 00:15:25.885 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 259149 00:15:25.885 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 259149 00:15:26.145 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:26.145 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:26.145 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:26.145 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:26.145 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:26.145 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:15:26.145 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:15:26.146 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:26.146 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:26.146 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.146 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.146 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.062 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:28.062 00:15:28.062 real 0m14.250s 00:15:28.062 user 0m21.589s 00:15:28.062 sys 0m6.675s 00:15:28.062 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.062 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:28.062 ************************************ 00:15:28.062 END TEST nvmf_invalid 00:15:28.062 ************************************ 00:15:28.324 09:33:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:28.324 09:33:14 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.324 09:33:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.324 09:33:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.324 ************************************ 00:15:28.324 START TEST nvmf_connect_stress 00:15:28.324 ************************************ 00:15:28.324 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:28.324 * Looking for test storage... 00:15:28.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.324 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:28.324 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:15:28.324 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:28.324 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:28.324 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.324 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.324 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.324 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.324 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.324 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.324 09:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.324 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.324 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:28.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.325 --rc genhtml_branch_coverage=1 00:15:28.325 --rc genhtml_function_coverage=1 00:15:28.325 --rc genhtml_legend=1 00:15:28.325 --rc geninfo_all_blocks=1 00:15:28.325 --rc geninfo_unexecuted_blocks=1 00:15:28.325 00:15:28.325 ' 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:28.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.325 --rc genhtml_branch_coverage=1 00:15:28.325 --rc genhtml_function_coverage=1 00:15:28.325 --rc genhtml_legend=1 00:15:28.325 --rc geninfo_all_blocks=1 00:15:28.325 --rc geninfo_unexecuted_blocks=1 00:15:28.325 00:15:28.325 ' 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:28.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.325 --rc genhtml_branch_coverage=1 00:15:28.325 --rc genhtml_function_coverage=1 00:15:28.325 --rc genhtml_legend=1 00:15:28.325 --rc geninfo_all_blocks=1 00:15:28.325 --rc geninfo_unexecuted_blocks=1 00:15:28.325 00:15:28.325 ' 00:15:28.325 09:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:28.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.325 --rc genhtml_branch_coverage=1 00:15:28.325 --rc genhtml_function_coverage=1 00:15:28.325 --rc genhtml_legend=1 00:15:28.325 --rc geninfo_all_blocks=1 00:15:28.325 --rc geninfo_unexecuted_blocks=1 00:15:28.325 00:15:28.325 ' 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.325 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.587 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.587 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.587 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.587 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.587 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.588 09:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.588 09:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:28.588 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.734 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.734 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.734 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:36.735 
Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:36.735 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:36.735 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.735 09:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:36.735 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.735 
09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.735 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:15:36.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:15:36.736 00:15:36.736 --- 10.0.0.2 ping statistics --- 00:15:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.736 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:15:36.736 00:15:36.736 --- 10.0.0.1 ping statistics --- 00:15:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.736 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
0xE 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=264801 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 264801 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 264801 ']' 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.736 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.736 [2024-11-19 09:33:22.588887] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:15:36.736 [2024-11-19 09:33:22.588956] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.736 [2024-11-19 09:33:22.688850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:36.736 [2024-11-19 09:33:22.741038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.736 [2024-11-19 09:33:22.741095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.736 [2024-11-19 09:33:22.741108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.736 [2024-11-19 09:33:22.741115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.736 [2024-11-19 09:33:22.741122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:36.736 [2024-11-19 09:33:22.742976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.736 [2024-11-19 09:33:22.743143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.736 [2024-11-19 09:33:22.743144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.736 [2024-11-19 09:33:23.466079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:36.736 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 [2024-11-19 09:33:23.491803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 NULL1 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=265119 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.998 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.999 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.260 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.260 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:37.260 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.260 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.260 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.836 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.836 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:37.836 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.836 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.836 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.098 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.098 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:38.098 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.098 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.098 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.359 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.359 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:38.359 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.359 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.359 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.620 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.620 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:38.620 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.620 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.620 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.881 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.881 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:38.881 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.881 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.881 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.453 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.453 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:39.453 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.453 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.453 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.714 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.714 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:39.714 09:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.714 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.714 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.976 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.976 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:39.976 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.976 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.976 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.237 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.237 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:40.237 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.237 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.237 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.498 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.498 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:40.498 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.498 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.498 09:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.068 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.068 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:41.068 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.068 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.068 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.329 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.330 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:41.330 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.330 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.330 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.590 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.590 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:41.590 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.590 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.590 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.852 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.852 09:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:41.852 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.852 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.852 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.113 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.113 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:42.113 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.113 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.113 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.683 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.683 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:42.683 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.683 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.683 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.944 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.944 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:42.944 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.944 09:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.944 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.205 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.205 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:43.205 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.205 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.205 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.466 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.466 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:43.466 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.466 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.466 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.726 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.726 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:43.726 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.726 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.726 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.297 09:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.297 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:44.297 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.297 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.297 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.558 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.558 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:44.558 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.559 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.559 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.820 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.820 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:44.820 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.820 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.820 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.082 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.082 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:45.082 
09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.082 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.082 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.342 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.342 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:45.342 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.342 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.342 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.914 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.914 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:45.914 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.914 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.914 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.174 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.174 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:46.174 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.174 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.174 
09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.434 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.434 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:46.434 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.434 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.434 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.694 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.694 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:46.694 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.694 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.694 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.955 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.215 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.215 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 265119 00:15:47.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (265119) - No such process 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 265119 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.216 rmmod nvme_tcp 00:15:47.216 rmmod nvme_fabrics 00:15:47.216 rmmod nvme_keyring 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 264801 ']' 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 264801 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 264801 ']' 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 264801 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264801 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264801' 00:15:47.216 killing process with pid 264801 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 264801 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 264801 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.216 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:49.766 00:15:49.766 real 0m21.175s 00:15:49.766 user 0m43.779s 00:15:49.766 sys 0m7.851s 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.766 ************************************ 00:15:49.766 END TEST nvmf_connect_stress 00:15:49.766 ************************************ 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.766 ************************************ 00:15:49.766 START TEST nvmf_fused_ordering 00:15:49.766 ************************************ 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:49.766 * Looking for test storage... 
00:15:49.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:49.766 09:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.766 09:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:49.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.766 --rc genhtml_branch_coverage=1 00:15:49.766 --rc genhtml_function_coverage=1 00:15:49.766 --rc genhtml_legend=1 00:15:49.766 --rc geninfo_all_blocks=1 00:15:49.766 --rc geninfo_unexecuted_blocks=1 00:15:49.766 00:15:49.766 ' 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:49.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.766 --rc genhtml_branch_coverage=1 00:15:49.766 --rc genhtml_function_coverage=1 00:15:49.766 --rc genhtml_legend=1 00:15:49.766 --rc geninfo_all_blocks=1 00:15:49.766 --rc geninfo_unexecuted_blocks=1 00:15:49.766 00:15:49.766 ' 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:49.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.766 --rc genhtml_branch_coverage=1 00:15:49.766 --rc genhtml_function_coverage=1 00:15:49.766 --rc genhtml_legend=1 00:15:49.766 --rc geninfo_all_blocks=1 00:15:49.766 --rc geninfo_unexecuted_blocks=1 00:15:49.766 00:15:49.766 ' 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:49.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.766 --rc genhtml_branch_coverage=1 00:15:49.766 --rc genhtml_function_coverage=1 00:15:49.766 --rc genhtml_legend=1 00:15:49.766 --rc geninfo_all_blocks=1 00:15:49.766 --rc geninfo_unexecuted_blocks=1 00:15:49.766 00:15:49.766 ' 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.766 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:49.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
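The nvmftestinit call above kicks off the network setup whose individual `ip` commands appear later in this log: one NIC port is moved into a private network namespace to act as the target while the other stays on the host as the initiator. A minimal sketch of that pattern, using the interface names and addresses from this run (`cvl_0_0`/`cvl_0_1`, 10.0.0.1/10.0.0.2); the `run` helper is hypothetical and just echoes each command so the sequence can be inspected without root:

```shell
# Echo-only stand-in; replace the body with "$@" to actually execute (needs root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                            # namespace name used in this log
run ip netns add "$NS"                        # target side lives in its own namespace
run ip link set cvl_0_0 netns "$NS"           # move one port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator address on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ping -c 1 10.0.0.2                        # verify host -> namespace reachability
```

This matches the `ip netns add` / `ip link set ... netns` / `ip addr add` / `ping` lines recorded further down in the transcript.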
00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:49.767 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.917 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.917 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:57.917 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:57.917 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:57.917 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.918 09:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:57.918 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.918 09:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:57.918 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.918 09:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:57.918 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:57.918 Found net devices under 0000:4b:00.1: cvl_0_1 
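The "Found net devices under 0000:4b:00.0/1" lines above come from expanding `/sys/bus/pci/devices/$pci/net/*`: a PCI function's bound kernel netdevs are simply the directory entries under its sysfs `net/` subdirectory. A small sketch of that lookup; `net_devs_for_pci` is a hypothetical helper (the sysfs root is a parameter only so the function can be exercised against a fixture), while the PCI address and `cvl_0_0` name are taken from this log:

```shell
# List the kernel network interfaces backed by a given PCI function.
# $1 = sysfs devices root (normally /sys/bus/pci/devices), $2 = PCI address.
net_devs_for_pci() {
    local d
    for d in "$1/$2/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"   # e.g. cvl_0_0 in this run
    done
}

# On a real host: net_devs_for_pci /sys/bus/pci/devices 0000:4b:00.0
```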
00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:57.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:57.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:15:57.918 00:15:57.918 --- 10.0.0.2 ping statistics --- 00:15:57.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.918 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:15:57.918 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:15:57.918 00:15:57.918 --- 10.0.0.1 ping statistics --- 00:15:57.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.918 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:57.919 09:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=271284 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 271284 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 271284 ']' 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.919 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:57.919 [2024-11-19 09:33:43.878954] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:15:57.919 [2024-11-19 09:33:43.879022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.919 [2024-11-19 09:33:43.978938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.919 [2024-11-19 09:33:44.030350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.919 [2024-11-19 09:33:44.030398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.919 [2024-11-19 09:33:44.030407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.919 [2024-11-19 09:33:44.030414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.919 [2024-11-19 09:33:44.030420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
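With nvmf_tgt up inside the namespace, the `rpc_cmd` calls that follow (fused_ordering.sh lines 15-20) configure the target over its UNIX-domain socket; in SPDK's test harness `rpc_cmd` typically wraps `scripts/rpc.py`, though that mapping is an assumption here. A sketch of the equivalent sequence, with arguments copied verbatim from this log and a hypothetical echo-only `rpc` stand-in so it can be read without a running target:

```shell
# Stand-in for: scripts/rpc.py -s /var/tmp/spdk.sock "$@" (assumed wrapper).
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, flags as logged
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                    # 1000 MB null bdev, 512B blocks
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

The 1000 MB null bdev is what the fused_ordering initiator later reports as "Namespace ID: 1 size: 1GB".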
00:15:57.919 [2024-11-19 09:33:44.031156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.181 [2024-11-19 09:33:44.740973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.181 [2024-11-19 09:33:44.765256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.181 NULL1 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.181 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:58.181 [2024-11-19 09:33:44.833809] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:15:58.181 [2024-11-19 09:33:44.833856] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid271518 ] 00:15:59.126 Attached to nqn.2016-06.io.spdk:cnode1 00:15:59.126 Namespace ID: 1 size: 1GB 00:15:59.126 fused_ordering(0) 00:15:59.126 fused_ordering(1) 00:15:59.126 fused_ordering(2) 00:15:59.126 fused_ordering(3) 00:15:59.126 fused_ordering(4) 00:15:59.126 fused_ordering(5) 00:15:59.126 fused_ordering(6) 00:15:59.126 fused_ordering(7) 00:15:59.126 fused_ordering(8) 00:15:59.126 fused_ordering(9) 00:15:59.126 fused_ordering(10) 00:15:59.126 fused_ordering(11) 00:15:59.127 fused_ordering(12) 00:15:59.127 fused_ordering(13) 00:15:59.127 fused_ordering(14) 00:15:59.127 fused_ordering(15) 00:15:59.127 fused_ordering(16) 00:15:59.127 fused_ordering(17) 00:15:59.127 fused_ordering(18) 00:15:59.127 fused_ordering(19) 00:15:59.127 fused_ordering(20) 00:15:59.127 fused_ordering(21) 00:15:59.127 fused_ordering(22) 00:15:59.127 fused_ordering(23) 00:15:59.127 fused_ordering(24) 00:15:59.127 fused_ordering(25) 00:15:59.127 fused_ordering(26) 00:15:59.127 fused_ordering(27) 00:15:59.127 
fused_ordering(28) 00:15:59.127 fused_ordering(29) 00:15:59.127 fused_ordering(30) 00:15:59.127 fused_ordering(31) 00:15:59.127 fused_ordering(32) 00:15:59.127 fused_ordering(33) 00:15:59.127 fused_ordering(34) 00:15:59.127 fused_ordering(35) 00:15:59.127 fused_ordering(36) 00:15:59.127 fused_ordering(37) 00:15:59.127 fused_ordering(38) 00:15:59.127 fused_ordering(39) 00:15:59.127 fused_ordering(40) 00:15:59.127 fused_ordering(41) 00:15:59.127 fused_ordering(42) 00:15:59.127 fused_ordering(43) 00:15:59.127 fused_ordering(44) 00:15:59.127 fused_ordering(45) 00:15:59.127 fused_ordering(46) 00:15:59.127 fused_ordering(47) 00:15:59.127 fused_ordering(48) 00:15:59.127 fused_ordering(49) 00:15:59.127 fused_ordering(50) 00:15:59.127 fused_ordering(51) 00:15:59.127 fused_ordering(52) 00:15:59.127 fused_ordering(53) 00:15:59.127 fused_ordering(54) 00:15:59.127 fused_ordering(55) 00:15:59.127 fused_ordering(56) 00:15:59.127 fused_ordering(57) 00:15:59.127 fused_ordering(58) 00:15:59.127 fused_ordering(59) 00:15:59.127 fused_ordering(60) 00:15:59.127 fused_ordering(61) 00:15:59.127 fused_ordering(62) 00:15:59.127 fused_ordering(63) 00:15:59.127 fused_ordering(64) 00:15:59.127 fused_ordering(65) 00:15:59.127 fused_ordering(66) 00:15:59.127 fused_ordering(67) 00:15:59.127 fused_ordering(68) 00:15:59.127 fused_ordering(69) 00:15:59.127 fused_ordering(70) 00:15:59.127 fused_ordering(71) 00:15:59.127 fused_ordering(72) 00:15:59.127 fused_ordering(73) 00:15:59.127 fused_ordering(74) 00:15:59.127 fused_ordering(75) 00:15:59.127 fused_ordering(76) 00:15:59.127 fused_ordering(77) 00:15:59.127 fused_ordering(78) 00:15:59.127 fused_ordering(79) 00:15:59.127 fused_ordering(80) 00:15:59.127 fused_ordering(81) 00:15:59.127 fused_ordering(82) 00:15:59.127 fused_ordering(83) 00:15:59.127 fused_ordering(84) 00:15:59.127 fused_ordering(85) 00:15:59.127 fused_ordering(86) 00:15:59.127 fused_ordering(87) 00:15:59.127 fused_ordering(88) 00:15:59.127 fused_ordering(89) 00:15:59.127 
00:15:59.127 fused_ordering(90) .. fused_ordering(997) [per-iteration counter lines, one per fused_ordering pass; timestamps advance in batches: 00:15:59.127, 00:15:59.389, 00:15:59.964, 00:16:00.538, 00:16:01.113]
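The counter lines above are emitted once per fused_ordering iteration and should increase by exactly one each time. A small hypothetical helper (not part of the SPDK tree; the function name is illustrative) can sanity-check a captured log for gaps in the sequence:

```shell
# Hypothetical helper: extract every "fused_ordering(N)" counter from a
# captured log file and verify the sequence has no gaps.
check_fused_ordering_log() {
    local log=$1 prev=-1 n
    # grep -o prints each match on its own line; tr strips to the digits.
    for n in $(grep -o 'fused_ordering([0-9]*)' "$log" | tr -dc '0-9\n'); do
        if [ "$prev" -ge 0 ] && [ "$n" -ne $((prev + 1)) ]; then
            echo "gap after $prev (saw $n)" >&2
            return 1
        fi
        prev=$n
    done
    return 0
}
```

Run against a saved build log, a non-zero exit status flags a missing or reordered iteration.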
00:16:01.113 fused_ordering(998) .. fused_ordering(1023) [per-iteration counter lines, all at 00:16:01.113]
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 271284 ']'
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 271284
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 271284 ']'
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 271284
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 271284
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 271284'
killing process with pid 271284
00:16:01.113 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 271284
00:16:01.114 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 271284
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
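The killprocess trace above probes the target pid with `kill -0` (which sends no signal, only checks the process exists), echoes a notice, kills it, then waits for it to exit. A minimal sketch of that pattern; the function name and structure are illustrative, not the actual common/autotest_common.sh implementation:

```shell
# Illustrative sketch of the kill-by-pid pattern seen in the trace.
killprocess_sketch() {
    local pid=$1
    # kill -0 delivers no signal; it only tests that the pid is alive.
    if kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        # Reap the process so the pid cannot be reused out from under us.
        wait "$pid" 2>/dev/null || true
    fi
}
```

Waiting after the kill matters: once `wait` returns, the pid is fully reaped, so a later `kill -0` reliably reports the process gone.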
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:01.375 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:03.288 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:03.288
00:16:03.288 real 0m13.899s
00:16:03.288 user 0m8.007s
00:16:03.288 sys 0m7.139s
00:16:03.288 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:03.288 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:03.288 ************************************
00:16:03.288 END TEST nvmf_fused_ordering
************************************
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:03.548 ************************************
00:16:03.548 START TEST nvmf_ns_masking
************************************
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:16:03.548 * Looking for test storage...
00:16:03.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:16:03.548 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:03.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.549 --rc genhtml_branch_coverage=1 00:16:03.549 --rc genhtml_function_coverage=1 00:16:03.549 --rc genhtml_legend=1 00:16:03.549 --rc geninfo_all_blocks=1 00:16:03.549 --rc geninfo_unexecuted_blocks=1 00:16:03.549 00:16:03.549 ' 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:03.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.549 --rc genhtml_branch_coverage=1 00:16:03.549 --rc genhtml_function_coverage=1 00:16:03.549 --rc genhtml_legend=1 00:16:03.549 --rc geninfo_all_blocks=1 00:16:03.549 --rc geninfo_unexecuted_blocks=1 00:16:03.549 00:16:03.549 ' 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:03.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.549 --rc genhtml_branch_coverage=1 00:16:03.549 --rc genhtml_function_coverage=1 00:16:03.549 --rc genhtml_legend=1 00:16:03.549 --rc geninfo_all_blocks=1 00:16:03.549 --rc geninfo_unexecuted_blocks=1 00:16:03.549 00:16:03.549 ' 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:03.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.549 --rc genhtml_branch_coverage=1 00:16:03.549 --rc 
genhtml_function_coverage=1 00:16:03.549 --rc genhtml_legend=1 00:16:03.549 --rc geninfo_all_blocks=1 00:16:03.549 --rc geninfo_unexecuted_blocks=1 00:16:03.549 00:16:03.549 ' 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.549 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:03.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e4b8107d-7905-4b11-a7a9-165e678a6907 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9dd33430-b6e1-45fc-a179-1b78b821e249 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d19b9b6b-744d-40f5-8ca5-01c7fe50e68d 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:03.810 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:10.397 09:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.397 09:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:10.397 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.397 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:10.398 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:16:10.398 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:10.398 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.398 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:10.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:16:10.659 00:16:10.659 --- 10.0.0.2 ping statistics --- 00:16:10.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.659 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:16:10.659 00:16:10.659 --- 10.0.0.1 ping statistics --- 00:16:10.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.659 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:10.659 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=276187 00:16:10.660 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 276187 
00:16:10.660 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:10.660 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 276187 ']' 00:16:10.660 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.660 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.660 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.660 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.660 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:10.921 [2024-11-19 09:33:57.453220] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:10.921 [2024-11-19 09:33:57.453271] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.921 [2024-11-19 09:33:57.545036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.921 [2024-11-19 09:33:57.580176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.921 [2024-11-19 09:33:57.580209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:10.921 [2024-11-19 09:33:57.580218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.921 [2024-11-19 09:33:57.580224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.921 [2024-11-19 09:33:57.580230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.921 [2024-11-19 09:33:57.580810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.494 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.494 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:11.494 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:11.494 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:11.494 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:11.755 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.755 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:11.755 [2024-11-19 09:33:58.425950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.755 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:11.755 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:11.755 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:16:12.016 Malloc1 00:16:12.016 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:12.278 Malloc2 00:16:12.278 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:12.278 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:12.545 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.811 [2024-11-19 09:33:59.308909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.811 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:12.811 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d19b9b6b-744d-40f5-8ca5-01c7fe50e68d -a 10.0.0.2 -s 4420 -i 4 00:16:12.811 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:12.811 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.811 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.811 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:12.811 09:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:15.366 [ 0]:0x1 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.366 
09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07704ee2cc22428eaf3bfc4753293454 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07704ee2cc22428eaf3bfc4753293454 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:15.366 [ 0]:0x1 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07704ee2cc22428eaf3bfc4753293454 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07704ee2cc22428eaf3bfc4753293454 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:15.366 [ 1]:0x2 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
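The `ns_is_visible` helper exercised above decides visibility by reading the namespace's NGUID with `nvme id-ns ... -o json | jq -r .nguid` and comparing it against 32 zeros: a namespace that is inactive (masked) for this host reports an all-zero NGUID. The comparison at its core can be sketched stand-alone, using the hypothetical function name `nguid_is_nonzero` and the NGUID values that appear in this trace:

```shell
# Hedged sketch of the all-zero-NGUID comparison done by ns_is_visible.
# nguid_is_nonzero is a hypothetical name for illustration only.
nguid_is_nonzero() {
    # a masked/inactive namespace reports an NGUID of 32 zeros
    [[ $1 != "00000000000000000000000000000000" ]]
}

# values taken from the trace: Malloc1's namespace vs. a masked namespace
nguid_is_nonzero 07704ee2cc22428eaf3bfc4753293454 && echo visible
nguid_is_nonzero 00000000000000000000000000000000 || echo masked
```

This is why the trace's `[[ $nguid != \0\0\0... ]]` test flips outcome once `--no-auto-visible` namespaces are attached or detached for the host.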
00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd345abb20fb4247a578b84212d99516 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd345abb20fb4247a578b84212d99516 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:15.366 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.366 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:15.627 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:15.891 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:15.891 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d19b9b6b-744d-40f5-8ca5-01c7fe50e68d -a 10.0.0.2 -s 4420 -i 4 00:16:15.891 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:15.891 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:15.891 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.891 09:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:16:15.891 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:16:15.891 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
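The `NOT` wrapper appearing here asserts that a command fails: it validates the argument with `type -t`, runs it, and inverts the exit status (the `es=1` / `(( es > 128 ))` lines are the real helper tracking the error code and screening out signal-induced exits). A reduced, hedged sketch of just the inversion:

```shell
# Hedged, reduced sketch of the NOT helper from autotest_common.sh.
# The real version also validates the argument via `type -t` and treats
# exit codes above 128 (deaths by signal) as genuine failures.
NOT() {
    if "$@"; then
        return 1   # the wrapped command unexpectedly succeeded
    fi
    return 0       # the wrapped command failed, which is what we asserted
}

NOT false && echo 'failure asserted'
```

In the trace this is what lets `NOT ns_is_visible 0x1` pass once namespace 1 has been masked from the host.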
00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:18.444 [ 0]:0x2 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd345abb20fb4247a578b84212d99516 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd345abb20fb4247a578b84212d99516 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.444 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.444 [ 0]:0x1 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07704ee2cc22428eaf3bfc4753293454 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07704ee2cc22428eaf3bfc4753293454 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:18.444 [ 1]:0x2 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.444 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd345abb20fb4247a578b84212d99516 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd345abb20fb4247a578b84212d99516 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:18.707 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:18.970 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:18.970 [ 0]:0x2 00:16:18.970 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.970 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:18.970 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd345abb20fb4247a578b84212d99516 00:16:18.970 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd345abb20fb4247a578b84212d99516 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.970 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:18.970 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.970 Failed to open ns nvme0n2, errno 2 00:16:18.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.970 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:19.231 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:19.231 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d19b9b6b-744d-40f5-8ca5-01c7fe50e68d -a 10.0.0.2 -s 4420 -i 4 00:16:19.231 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:19.231 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:19.231 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.231 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:19.231 09:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:19.231 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:21.782 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:21.782 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:21.782 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.782 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:21.782 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.782 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:21.782 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:21.782 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:21.782 [ 0]:0x1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07704ee2cc22428eaf3bfc4753293454 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07704ee2cc22428eaf3bfc4753293454 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:21.782 [ 1]:0x2 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd345abb20fb4247a578b84212d99516 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd345abb20fb4247a578b84212d99516 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg 
ns_is_visible 0x1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:21.782 [ 0]:0x2 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd345abb20fb4247a578b84212d99516 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd345abb20fb4247a578b84212d99516 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.782 
09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:21.782 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:22.045 [2024-11-19 09:34:08.650488] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:22.045 request: 00:16:22.045 { 00:16:22.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.045 "nsid": 2, 00:16:22.045 "host": "nqn.2016-06.io.spdk:host1", 00:16:22.045 "method": "nvmf_ns_remove_host", 00:16:22.045 "req_id": 1 00:16:22.045 } 00:16:22.045 Got JSON-RPC error response 00:16:22.045 response: 00:16:22.045 { 00:16:22.045 "code": -32602, 00:16:22.045 "message": "Invalid parameters" 00:16:22.045 } 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:22.045 09:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:22.045 [ 0]:0x2 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:22.045 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dd345abb20fb4247a578b84212d99516 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dd345abb20fb4247a578b84212d99516 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=278697 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 278697 /var/tmp/host.sock 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 278697 ']' 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:22.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.307 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:22.307 [2024-11-19 09:34:08.906118] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:22.307 [2024-11-19 09:34:08.906175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278697 ] 00:16:22.307 [2024-11-19 09:34:08.994370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.307 [2024-11-19 09:34:09.029972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.250 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.250 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:23.250 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.250 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:23.511 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e4b8107d-7905-4b11-a7a9-165e678a6907 00:16:23.511 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:23.511 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E4B8107D79054B11A7A9165E678A6907 -i 00:16:23.772 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9dd33430-b6e1-45fc-a179-1b78b821e249 00:16:23.772 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:23.772 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9DD33430B6E145FCA1791B78B821E249 -i 00:16:23.772 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:24.033 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:24.294 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:24.294 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:24.556 nvme0n1 00:16:24.556 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:24.556 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:25.129 nvme1n2 00:16:25.129 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:25.129 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:25.129 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:25.129 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:25.129 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:25.129 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:25.129 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:25.129 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:25.129 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:25.391 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e4b8107d-7905-4b11-a7a9-165e678a6907 == \e\4\b\8\1\0\7\d\-\7\9\0\5\-\4\b\1\1\-\a\7\a\9\-\1\6\5\e\6\7\8\a\6\9\0\7 ]] 00:16:25.391 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:25.391 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:25.391 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:25.654 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 9dd33430-b6e1-45fc-a179-1b78b821e249 == \9\d\d\3\3\4\3\0\-\b\6\e\1\-\4\5\f\c\-\a\1\7\9\-\1\b\7\8\b\8\2\1\e\2\4\9 ]] 00:16:25.654 09:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.654 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e4b8107d-7905-4b11-a7a9-165e678a6907 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E4B8107D79054B11A7A9165E678A6907 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E4B8107D79054B11A7A9165E678A6907 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:25.915 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E4B8107D79054B11A7A9165E678A6907 00:16:26.176 [2024-11-19 09:34:12.701201] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:26.176 [2024-11-19 09:34:12.701234] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:26.176 [2024-11-19 09:34:12.701242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.176 request: 00:16:26.176 { 00:16:26.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.176 "namespace": { 00:16:26.176 "bdev_name": "invalid", 00:16:26.176 "nsid": 1, 00:16:26.176 "nguid": "E4B8107D79054B11A7A9165E678A6907", 00:16:26.176 "no_auto_visible": false 00:16:26.176 }, 00:16:26.176 "method": "nvmf_subsystem_add_ns", 00:16:26.176 "req_id": 1 00:16:26.176 } 00:16:26.176 Got JSON-RPC error response 00:16:26.176 response: 00:16:26.176 { 00:16:26.176 "code": -32602, 00:16:26.176 "message": "Invalid parameters" 00:16:26.176 } 00:16:26.176 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:26.176 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:26.176 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:26.176 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:26.176 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e4b8107d-7905-4b11-a7a9-165e678a6907 00:16:26.176 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:26.176 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E4B8107D79054B11A7A9165E678A6907 -i 00:16:26.176 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:28.727 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:28.727 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:28.727 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 278697 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 278697 ']' 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 278697 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278697 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278697' 00:16:28.727 killing process with pid 278697 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 278697 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 278697 00:16:28.727 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:28.988 rmmod nvme_tcp 00:16:28.988 rmmod 
nvme_fabrics 00:16:28.988 rmmod nvme_keyring 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 276187 ']' 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 276187 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 276187 ']' 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 276187 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276187 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276187' 00:16:28.988 killing process with pid 276187 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 276187 00:16:28.988 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 276187 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:29.250 09:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.250 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.165 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:31.165 00:16:31.165 real 0m27.755s 00:16:31.165 user 0m32.061s 00:16:31.165 sys 0m7.797s 00:16:31.165 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.165 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:31.165 ************************************ 00:16:31.165 END TEST nvmf_ns_masking 00:16:31.165 ************************************ 00:16:31.165 09:34:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:31.165 09:34:17 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:31.165 09:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:31.165 09:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.165 09:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:31.428 ************************************ 00:16:31.428 START TEST nvmf_nvme_cli 00:16:31.428 ************************************ 00:16:31.428 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:31.428 * Looking for test storage... 00:16:31.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.428 09:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:31.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.428 --rc genhtml_branch_coverage=1 00:16:31.428 --rc genhtml_function_coverage=1 00:16:31.428 --rc genhtml_legend=1 00:16:31.428 --rc geninfo_all_blocks=1 00:16:31.428 --rc geninfo_unexecuted_blocks=1 00:16:31.428 
00:16:31.428 ' 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:31.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.428 --rc genhtml_branch_coverage=1 00:16:31.428 --rc genhtml_function_coverage=1 00:16:31.428 --rc genhtml_legend=1 00:16:31.428 --rc geninfo_all_blocks=1 00:16:31.428 --rc geninfo_unexecuted_blocks=1 00:16:31.428 00:16:31.428 ' 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:31.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.428 --rc genhtml_branch_coverage=1 00:16:31.428 --rc genhtml_function_coverage=1 00:16:31.428 --rc genhtml_legend=1 00:16:31.428 --rc geninfo_all_blocks=1 00:16:31.428 --rc geninfo_unexecuted_blocks=1 00:16:31.428 00:16:31.428 ' 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:31.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.428 --rc genhtml_branch_coverage=1 00:16:31.428 --rc genhtml_function_coverage=1 00:16:31.428 --rc genhtml_legend=1 00:16:31.428 --rc geninfo_all_blocks=1 00:16:31.428 --rc geninfo_unexecuted_blocks=1 00:16:31.428 00:16:31.428 ' 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.428 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.429 09:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:31.429 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:39.581 09:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:39.581 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.581 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:39.582 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.582 09:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:39.582 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:39.582 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.582 09:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:39.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:16:39.582 00:16:39.582 --- 10.0.0.2 ping statistics --- 00:16:39.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.582 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:16:39.582 00:16:39.582 --- 10.0.0.1 ping statistics --- 00:16:39.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.582 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:39.582 09:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=284090 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 284090 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 284090 ']' 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.582 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:39.582 [2024-11-19 09:34:25.661906] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:39.582 [2024-11-19 09:34:25.661971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.582 [2024-11-19 09:34:25.760584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.582 [2024-11-19 09:34:25.815859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.582 [2024-11-19 09:34:25.815913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.582 [2024-11-19 09:34:25.815922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.582 [2024-11-19 09:34:25.815929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.582 [2024-11-19 09:34:25.815935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:39.582 [2024-11-19 09:34:25.818000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.582 [2024-11-19 09:34:25.818155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.582 [2024-11-19 09:34:25.818317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.582 [2024-11-19 09:34:25.818491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:39.845 [2024-11-19 09:34:26.534354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:39.845 Malloc0 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.845 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 Malloc1 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 [2024-11-19 09:34:26.647915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.107 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:40.369 00:16:40.369 Discovery Log Number of Records 2, Generation counter 2 00:16:40.369 =====Discovery Log Entry 0====== 00:16:40.369 trtype: tcp 00:16:40.369 adrfam: ipv4 00:16:40.369 subtype: current discovery subsystem 00:16:40.369 treq: not required 00:16:40.369 portid: 0 00:16:40.369 trsvcid: 4420 
00:16:40.369 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:40.369 traddr: 10.0.0.2 00:16:40.369 eflags: explicit discovery connections, duplicate discovery information 00:16:40.369 sectype: none 00:16:40.369 =====Discovery Log Entry 1====== 00:16:40.369 trtype: tcp 00:16:40.369 adrfam: ipv4 00:16:40.369 subtype: nvme subsystem 00:16:40.369 treq: not required 00:16:40.369 portid: 0 00:16:40.369 trsvcid: 4420 00:16:40.369 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:40.369 traddr: 10.0.0.2 00:16:40.369 eflags: none 00:16:40.369 sectype: none 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:40.369 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.757 09:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:41.757 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:41.757 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.757 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:41.757 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:41.757 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:44.305 
09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:44.305 /dev/nvme0n2 ]] 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:44.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:44.305 rmmod nvme_tcp 00:16:44.305 rmmod nvme_fabrics 00:16:44.305 rmmod nvme_keyring 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 284090 ']' 
00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 284090 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 284090 ']' 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 284090 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:44.305 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284090 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284090' 00:16:44.306 killing process with pid 284090 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 284090 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 284090 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.306 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.855 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:46.855 00:16:46.855 real 0m15.070s 00:16:46.855 user 0m22.574s 00:16:46.855 sys 0m6.303s 00:16:46.855 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.855 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.855 ************************************ 00:16:46.855 END TEST nvmf_nvme_cli 00:16:46.855 ************************************ 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.855 ************************************ 00:16:46.855 START TEST 
nvmf_vfio_user 00:16:46.855 ************************************ 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:46.855 * Looking for test storage... 00:16:46.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.855 09:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:46.855 09:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:46.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.855 --rc genhtml_branch_coverage=1 00:16:46.855 --rc genhtml_function_coverage=1 00:16:46.855 --rc genhtml_legend=1 00:16:46.855 --rc geninfo_all_blocks=1 00:16:46.855 --rc geninfo_unexecuted_blocks=1 00:16:46.855 00:16:46.855 ' 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:46.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.855 --rc genhtml_branch_coverage=1 00:16:46.855 --rc genhtml_function_coverage=1 00:16:46.855 --rc genhtml_legend=1 00:16:46.855 --rc geninfo_all_blocks=1 00:16:46.855 --rc geninfo_unexecuted_blocks=1 00:16:46.855 00:16:46.855 ' 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:46.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.855 --rc genhtml_branch_coverage=1 00:16:46.855 --rc genhtml_function_coverage=1 00:16:46.855 --rc genhtml_legend=1 00:16:46.855 --rc geninfo_all_blocks=1 00:16:46.855 --rc geninfo_unexecuted_blocks=1 00:16:46.855 00:16:46.855 ' 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:46.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.855 --rc genhtml_branch_coverage=1 00:16:46.855 --rc genhtml_function_coverage=1 00:16:46.855 --rc genhtml_legend=1 00:16:46.855 --rc geninfo_all_blocks=1 00:16:46.855 --rc geninfo_unexecuted_blocks=1 00:16:46.855 00:16:46.855 ' 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.855 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.855 
09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:46.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:46.856 09:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=285844 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 285844' 00:16:46.856 Process pid: 285844 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 285844 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
285844 ']' 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.856 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:46.856 [2024-11-19 09:34:33.384536] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:46.856 [2024-11-19 09:34:33.384608] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.856 [2024-11-19 09:34:33.472042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.856 [2024-11-19 09:34:33.503124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.856 [2024-11-19 09:34:33.503151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.856 [2024-11-19 09:34:33.503161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.856 [2024-11-19 09:34:33.503167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.856 [2024-11-19 09:34:33.503171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:46.856 [2024-11-19 09:34:33.504632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.856 [2024-11-19 09:34:33.504758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.856 [2024-11-19 09:34:33.504905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.856 [2024-11-19 09:34:33.504907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.799 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.799 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:47.799 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:48.744 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:48.744 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:48.744 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:48.744 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:48.744 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:48.744 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:49.005 Malloc1 00:16:49.005 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:49.266 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:49.266 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:49.526 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:49.526 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:49.526 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:49.787 Malloc2 00:16:49.787 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:49.787 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:50.048 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:50.312 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:50.312 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:50.312 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:50.312 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:50.312 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:50.312 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:50.312 [2024-11-19 09:34:36.888942] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:50.312 [2024-11-19 09:34:36.888992] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286566 ] 00:16:50.312 [2024-11-19 09:34:36.929448] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:50.312 [2024-11-19 09:34:36.934766] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:50.312 [2024-11-19 09:34:36.934783] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f318f74e000 00:16:50.312 [2024-11-19 09:34:36.935764] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:50.312 [2024-11-19 09:34:36.936763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:50.312 [2024-11-19 09:34:36.937764] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:50.312 [2024-11-19 09:34:36.938772] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:50.312 [2024-11-19 09:34:36.939771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:50.312 [2024-11-19 09:34:36.940784] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:50.312 [2024-11-19 09:34:36.941793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:50.312 [2024-11-19 09:34:36.942796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:50.312 [2024-11-19 09:34:36.943805] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:50.312 [2024-11-19 09:34:36.943811] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f318f743000 00:16:50.312 [2024-11-19 09:34:36.944726] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:50.312 [2024-11-19 09:34:36.954176] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:50.312 [2024-11-19 09:34:36.954193] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:50.312 [2024-11-19 09:34:36.959887] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:50.312 [2024-11-19 09:34:36.959919] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:50.312 [2024-11-19 09:34:36.959976] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:50.312 [2024-11-19 09:34:36.959987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:50.312 [2024-11-19 09:34:36.959991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:50.312 [2024-11-19 09:34:36.960889] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:50.312 [2024-11-19 09:34:36.960896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:50.312 [2024-11-19 09:34:36.960902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:50.312 [2024-11-19 09:34:36.961897] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:50.312 [2024-11-19 09:34:36.961903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:50.312 [2024-11-19 09:34:36.961911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:50.312 [2024-11-19 09:34:36.962906] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:50.312 [2024-11-19 09:34:36.962913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:50.312 [2024-11-19 09:34:36.963906] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:50.312 [2024-11-19 09:34:36.963913] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:50.312 [2024-11-19 09:34:36.963916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:50.312 [2024-11-19 09:34:36.963921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:50.312 [2024-11-19 09:34:36.964027] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:50.312 [2024-11-19 09:34:36.964030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:50.312 [2024-11-19 09:34:36.964034] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:50.312 [2024-11-19 09:34:36.964912] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:50.312 [2024-11-19 09:34:36.965919] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:50.312 [2024-11-19 09:34:36.966922] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:50.312 [2024-11-19 09:34:36.967929] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:50.312 [2024-11-19 09:34:36.967979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:50.312 [2024-11-19 09:34:36.968940] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:50.312 [2024-11-19 09:34:36.968945] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:50.312 [2024-11-19 09:34:36.968949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:50.312 [2024-11-19 09:34:36.968963] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:50.312 [2024-11-19 09:34:36.968971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:50.312 [2024-11-19 09:34:36.968981] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:50.312 [2024-11-19 09:34:36.968984] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:50.313 [2024-11-19 09:34:36.968987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:50.313 [2024-11-19 09:34:36.968997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:50.313 [2024-11-19 09:34:36.969032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:16:50.313 [2024-11-19 09:34:36.969040] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:50.313 [2024-11-19 09:34:36.969044] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:50.313 [2024-11-19 09:34:36.969047] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:50.313 [2024-11-19 09:34:36.969050] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:50.313 [2024-11-19 09:34:36.969055] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:50.313 [2024-11-19 09:34:36.969058] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:50.313 [2024-11-19 09:34:36.969061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:50.313 [2024-11-19 09:34:36.969087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:50.313 [2024-11-19 09:34:36.969095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.313 [2024-11-19 09:34:36.969101] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.313 [2024-11-19 09:34:36.969107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.313 [2024-11-19 09:34:36.969112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.313 [2024-11-19 09:34:36.969116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:50.313 [2024-11-19 09:34:36.969135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:50.313 [2024-11-19 09:34:36.969140] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:50.313 [2024-11-19 09:34:36.969144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:50.313 [2024-11-19 09:34:36.969169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:50.313 [2024-11-19 09:34:36.969212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969233] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:50.313 [2024-11-19 09:34:36.969236] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:50.313 [2024-11-19 09:34:36.969239] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:50.313 [2024-11-19 09:34:36.969243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:50.313 [2024-11-19 09:34:36.969257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:50.313 [2024-11-19 09:34:36.969263] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:50.313 [2024-11-19 09:34:36.969270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969280] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:50.313 [2024-11-19 09:34:36.969283] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:50.313 [2024-11-19 09:34:36.969285] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:50.313 [2024-11-19 09:34:36.969290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:50.313 [2024-11-19 09:34:36.969304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:50.313 [2024-11-19 09:34:36.969312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969322] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:50.313 [2024-11-19 09:34:36.969325] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:50.313 [2024-11-19 09:34:36.969328] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:50.313 [2024-11-19 09:34:36.969332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:50.313 [2024-11-19 09:34:36.969343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:16:50.313 [2024-11-19 09:34:36.969349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969375] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:50.313 [2024-11-19 09:34:36.969378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:50.313 [2024-11-19 09:34:36.969382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:50.313 [2024-11-19 09:34:36.969396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:50.313 [2024-11-19 09:34:36.969407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:50.313 [2024-11-19 09:34:36.969415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:50.313 [2024-11-19 09:34:36.969422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:50.313 [2024-11-19 09:34:36.969430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:50.314 [2024-11-19 09:34:36.969440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:50.314 [2024-11-19 09:34:36.969448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:50.314 [2024-11-19 09:34:36.969454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:50.314 [2024-11-19 09:34:36.969463] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:50.314 [2024-11-19 09:34:36.969466] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:50.314 [2024-11-19 09:34:36.969469] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:50.314 [2024-11-19 09:34:36.969471] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:50.314 [2024-11-19 09:34:36.969474] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:50.314 [2024-11-19 09:34:36.969478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:50.314 [2024-11-19 09:34:36.969484] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:50.314 [2024-11-19 09:34:36.969487] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:50.314 [2024-11-19 09:34:36.969489] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:50.314 [2024-11-19 09:34:36.969493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:50.314 [2024-11-19 09:34:36.969499] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:50.314 [2024-11-19 09:34:36.969501] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:50.314 [2024-11-19 09:34:36.969504] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:50.314 [2024-11-19 09:34:36.969508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:50.314 [2024-11-19 09:34:36.969514] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:50.314 [2024-11-19 09:34:36.969518] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:50.314 [2024-11-19 09:34:36.969520] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:50.314 [2024-11-19 09:34:36.969524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:50.314 [2024-11-19 09:34:36.969529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:50.314 [2024-11-19 
09:34:36.969538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:50.314 [2024-11-19 09:34:36.969545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:50.314 [2024-11-19 09:34:36.969550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:50.314 ===================================================== 00:16:50.314 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:50.314 ===================================================== 00:16:50.314 Controller Capabilities/Features 00:16:50.314 ================================ 00:16:50.314 Vendor ID: 4e58 00:16:50.314 Subsystem Vendor ID: 4e58 00:16:50.314 Serial Number: SPDK1 00:16:50.314 Model Number: SPDK bdev Controller 00:16:50.314 Firmware Version: 25.01 00:16:50.314 Recommended Arb Burst: 6 00:16:50.314 IEEE OUI Identifier: 8d 6b 50 00:16:50.314 Multi-path I/O 00:16:50.314 May have multiple subsystem ports: Yes 00:16:50.314 May have multiple controllers: Yes 00:16:50.314 Associated with SR-IOV VF: No 00:16:50.314 Max Data Transfer Size: 131072 00:16:50.314 Max Number of Namespaces: 32 00:16:50.314 Max Number of I/O Queues: 127 00:16:50.314 NVMe Specification Version (VS): 1.3 00:16:50.314 NVMe Specification Version (Identify): 1.3 00:16:50.314 Maximum Queue Entries: 256 00:16:50.314 Contiguous Queues Required: Yes 00:16:50.314 Arbitration Mechanisms Supported 00:16:50.314 Weighted Round Robin: Not Supported 00:16:50.314 Vendor Specific: Not Supported 00:16:50.314 Reset Timeout: 15000 ms 00:16:50.314 Doorbell Stride: 4 bytes 00:16:50.314 NVM Subsystem Reset: Not Supported 00:16:50.314 Command Sets Supported 00:16:50.314 NVM Command Set: Supported 00:16:50.314 Boot Partition: Not Supported 00:16:50.314 Memory Page Size Minimum: 4096 bytes 00:16:50.314 
Memory Page Size Maximum: 4096 bytes 00:16:50.314 Persistent Memory Region: Not Supported 00:16:50.314 Optional Asynchronous Events Supported 00:16:50.314 Namespace Attribute Notices: Supported 00:16:50.314 Firmware Activation Notices: Not Supported 00:16:50.314 ANA Change Notices: Not Supported 00:16:50.314 PLE Aggregate Log Change Notices: Not Supported 00:16:50.314 LBA Status Info Alert Notices: Not Supported 00:16:50.314 EGE Aggregate Log Change Notices: Not Supported 00:16:50.314 Normal NVM Subsystem Shutdown event: Not Supported 00:16:50.314 Zone Descriptor Change Notices: Not Supported 00:16:50.314 Discovery Log Change Notices: Not Supported 00:16:50.314 Controller Attributes 00:16:50.314 128-bit Host Identifier: Supported 00:16:50.314 Non-Operational Permissive Mode: Not Supported 00:16:50.314 NVM Sets: Not Supported 00:16:50.314 Read Recovery Levels: Not Supported 00:16:50.314 Endurance Groups: Not Supported 00:16:50.314 Predictable Latency Mode: Not Supported 00:16:50.314 Traffic Based Keep ALive: Not Supported 00:16:50.314 Namespace Granularity: Not Supported 00:16:50.314 SQ Associations: Not Supported 00:16:50.314 UUID List: Not Supported 00:16:50.314 Multi-Domain Subsystem: Not Supported 00:16:50.314 Fixed Capacity Management: Not Supported 00:16:50.314 Variable Capacity Management: Not Supported 00:16:50.314 Delete Endurance Group: Not Supported 00:16:50.314 Delete NVM Set: Not Supported 00:16:50.314 Extended LBA Formats Supported: Not Supported 00:16:50.314 Flexible Data Placement Supported: Not Supported 00:16:50.314 00:16:50.314 Controller Memory Buffer Support 00:16:50.314 ================================ 00:16:50.314 Supported: No 00:16:50.314 00:16:50.314 Persistent Memory Region Support 00:16:50.314 ================================ 00:16:50.314 Supported: No 00:16:50.314 00:16:50.314 Admin Command Set Attributes 00:16:50.314 ============================ 00:16:50.314 Security Send/Receive: Not Supported 00:16:50.314 Format NVM: Not Supported 
00:16:50.314 Firmware Activate/Download: Not Supported 00:16:50.314 Namespace Management: Not Supported 00:16:50.314 Device Self-Test: Not Supported 00:16:50.314 Directives: Not Supported 00:16:50.314 NVMe-MI: Not Supported 00:16:50.314 Virtualization Management: Not Supported 00:16:50.314 Doorbell Buffer Config: Not Supported 00:16:50.314 Get LBA Status Capability: Not Supported 00:16:50.314 Command & Feature Lockdown Capability: Not Supported 00:16:50.314 Abort Command Limit: 4 00:16:50.314 Async Event Request Limit: 4 00:16:50.314 Number of Firmware Slots: N/A 00:16:50.314 Firmware Slot 1 Read-Only: N/A 00:16:50.314 Firmware Activation Without Reset: N/A 00:16:50.314 Multiple Update Detection Support: N/A 00:16:50.314 Firmware Update Granularity: No Information Provided 00:16:50.314 Per-Namespace SMART Log: No 00:16:50.314 Asymmetric Namespace Access Log Page: Not Supported 00:16:50.314 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:50.314 Command Effects Log Page: Supported 00:16:50.314 Get Log Page Extended Data: Supported 00:16:50.314 Telemetry Log Pages: Not Supported 00:16:50.314 Persistent Event Log Pages: Not Supported 00:16:50.314 Supported Log Pages Log Page: May Support 00:16:50.314 Commands Supported & Effects Log Page: Not Supported 00:16:50.314 Feature Identifiers & Effects Log Page:May Support 00:16:50.314 NVMe-MI Commands & Effects Log Page: May Support 00:16:50.314 Data Area 4 for Telemetry Log: Not Supported 00:16:50.314 Error Log Page Entries Supported: 128 00:16:50.314 Keep Alive: Supported 00:16:50.314 Keep Alive Granularity: 10000 ms 00:16:50.314 00:16:50.314 NVM Command Set Attributes 00:16:50.314 ========================== 00:16:50.314 Submission Queue Entry Size 00:16:50.314 Max: 64 00:16:50.314 Min: 64 00:16:50.314 Completion Queue Entry Size 00:16:50.314 Max: 16 00:16:50.314 Min: 16 00:16:50.314 Number of Namespaces: 32 00:16:50.314 Compare Command: Supported 00:16:50.314 Write Uncorrectable Command: Not Supported 00:16:50.314 Dataset 
Management Command: Supported 00:16:50.314 Write Zeroes Command: Supported 00:16:50.314 Set Features Save Field: Not Supported 00:16:50.314 Reservations: Not Supported 00:16:50.314 Timestamp: Not Supported 00:16:50.314 Copy: Supported 00:16:50.314 Volatile Write Cache: Present 00:16:50.314 Atomic Write Unit (Normal): 1 00:16:50.314 Atomic Write Unit (PFail): 1 00:16:50.314 Atomic Compare & Write Unit: 1 00:16:50.314 Fused Compare & Write: Supported 00:16:50.314 Scatter-Gather List 00:16:50.314 SGL Command Set: Supported (Dword aligned) 00:16:50.314 SGL Keyed: Not Supported 00:16:50.314 SGL Bit Bucket Descriptor: Not Supported 00:16:50.314 SGL Metadata Pointer: Not Supported 00:16:50.314 Oversized SGL: Not Supported 00:16:50.314 SGL Metadata Address: Not Supported 00:16:50.315 SGL Offset: Not Supported 00:16:50.315 Transport SGL Data Block: Not Supported 00:16:50.315 Replay Protected Memory Block: Not Supported 00:16:50.315 00:16:50.315 Firmware Slot Information 00:16:50.315 ========================= 00:16:50.315 Active slot: 1 00:16:50.315 Slot 1 Firmware Revision: 25.01 00:16:50.315 00:16:50.315 00:16:50.315 Commands Supported and Effects 00:16:50.315 ============================== 00:16:50.315 Admin Commands 00:16:50.315 -------------- 00:16:50.315 Get Log Page (02h): Supported 00:16:50.315 Identify (06h): Supported 00:16:50.315 Abort (08h): Supported 00:16:50.315 Set Features (09h): Supported 00:16:50.315 Get Features (0Ah): Supported 00:16:50.315 Asynchronous Event Request (0Ch): Supported 00:16:50.315 Keep Alive (18h): Supported 00:16:50.315 I/O Commands 00:16:50.315 ------------ 00:16:50.315 Flush (00h): Supported LBA-Change 00:16:50.315 Write (01h): Supported LBA-Change 00:16:50.315 Read (02h): Supported 00:16:50.315 Compare (05h): Supported 00:16:50.315 Write Zeroes (08h): Supported LBA-Change 00:16:50.315 Dataset Management (09h): Supported LBA-Change 00:16:50.315 Copy (19h): Supported LBA-Change 00:16:50.315 00:16:50.315 Error Log 00:16:50.315 ========= 
00:16:50.315 00:16:50.315 Arbitration 00:16:50.315 =========== 00:16:50.315 Arbitration Burst: 1 00:16:50.315 00:16:50.315 Power Management 00:16:50.315 ================ 00:16:50.315 Number of Power States: 1 00:16:50.315 Current Power State: Power State #0 00:16:50.315 Power State #0: 00:16:50.315 Max Power: 0.00 W 00:16:50.315 Non-Operational State: Operational 00:16:50.315 Entry Latency: Not Reported 00:16:50.315 Exit Latency: Not Reported 00:16:50.315 Relative Read Throughput: 0 00:16:50.315 Relative Read Latency: 0 00:16:50.315 Relative Write Throughput: 0 00:16:50.315 Relative Write Latency: 0 00:16:50.315 Idle Power: Not Reported 00:16:50.315 Active Power: Not Reported 00:16:50.315 Non-Operational Permissive Mode: Not Supported 00:16:50.315 00:16:50.315 Health Information 00:16:50.315 ================== 00:16:50.315 Critical Warnings: 00:16:50.315 Available Spare Space: OK 00:16:50.315 Temperature: OK 00:16:50.315 Device Reliability: OK 00:16:50.315 Read Only: No 00:16:50.315 Volatile Memory Backup: OK 00:16:50.315 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:50.315 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:50.315 Available Spare: 0% 00:16:50.315 Available Sp[2024-11-19 09:34:36.969622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:50.315 [2024-11-19 09:34:36.969632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:50.315 [2024-11-19 09:34:36.969651] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:50.315 [2024-11-19 09:34:36.969658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.315 [2024-11-19 09:34:36.969663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.315 [2024-11-19 09:34:36.969667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.315 [2024-11-19 09:34:36.969671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.315 [2024-11-19 09:34:36.969947] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:50.315 [2024-11-19 09:34:36.969954] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:50.315 [2024-11-19 09:34:36.970953] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:50.315 [2024-11-19 09:34:36.970994] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:50.315 [2024-11-19 09:34:36.970999] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:50.315 [2024-11-19 09:34:36.971963] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:50.315 [2024-11-19 09:34:36.971971] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:50.315 [2024-11-19 09:34:36.972022] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:50.315 [2024-11-19 09:34:36.974165] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:50.315 are Threshold: 0% 00:16:50.315 Life Percentage Used: 0% 00:16:50.315 Data Units Read: 0 00:16:50.315 Data 
Units Written: 0 00:16:50.315 Host Read Commands: 0 00:16:50.315 Host Write Commands: 0 00:16:50.315 Controller Busy Time: 0 minutes 00:16:50.315 Power Cycles: 0 00:16:50.315 Power On Hours: 0 hours 00:16:50.315 Unsafe Shutdowns: 0 00:16:50.315 Unrecoverable Media Errors: 0 00:16:50.315 Lifetime Error Log Entries: 0 00:16:50.315 Warning Temperature Time: 0 minutes 00:16:50.315 Critical Temperature Time: 0 minutes 00:16:50.315 00:16:50.315 Number of Queues 00:16:50.315 ================ 00:16:50.315 Number of I/O Submission Queues: 127 00:16:50.315 Number of I/O Completion Queues: 127 00:16:50.315 00:16:50.315 Active Namespaces 00:16:50.315 ================= 00:16:50.315 Namespace ID:1 00:16:50.315 Error Recovery Timeout: Unlimited 00:16:50.315 Command Set Identifier: NVM (00h) 00:16:50.315 Deallocate: Supported 00:16:50.315 Deallocated/Unwritten Error: Not Supported 00:16:50.315 Deallocated Read Value: Unknown 00:16:50.315 Deallocate in Write Zeroes: Not Supported 00:16:50.315 Deallocated Guard Field: 0xFFFF 00:16:50.315 Flush: Supported 00:16:50.315 Reservation: Supported 00:16:50.315 Namespace Sharing Capabilities: Multiple Controllers 00:16:50.315 Size (in LBAs): 131072 (0GiB) 00:16:50.315 Capacity (in LBAs): 131072 (0GiB) 00:16:50.315 Utilization (in LBAs): 131072 (0GiB) 00:16:50.315 NGUID: 7341D8E4FD2341FE8D5105B62FBBE951 00:16:50.315 UUID: 7341d8e4-fd23-41fe-8d51-05b62fbbe951 00:16:50.315 Thin Provisioning: Not Supported 00:16:50.315 Per-NS Atomic Units: Yes 00:16:50.315 Atomic Boundary Size (Normal): 0 00:16:50.315 Atomic Boundary Size (PFail): 0 00:16:50.315 Atomic Boundary Offset: 0 00:16:50.315 Maximum Single Source Range Length: 65535 00:16:50.315 Maximum Copy Length: 65535 00:16:50.315 Maximum Source Range Count: 1 00:16:50.315 NGUID/EUI64 Never Reused: No 00:16:50.315 Namespace Write Protected: No 00:16:50.315 Number of LBA Formats: 1 00:16:50.315 Current LBA Format: LBA Format #00 00:16:50.315 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:16:50.315 00:16:50.315 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:50.577 [2024-11-19 09:34:37.163849] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:55.865 Initializing NVMe Controllers 00:16:55.865 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:55.865 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:55.865 Initialization complete. Launching workers. 00:16:55.865 ======================================================== 00:16:55.865 Latency(us) 00:16:55.865 Device Information : IOPS MiB/s Average min max 00:16:55.865 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39971.86 156.14 3202.11 848.25 9775.57 00:16:55.865 ======================================================== 00:16:55.865 Total : 39971.86 156.14 3202.11 848.25 9775.57 00:16:55.865 00:16:55.865 [2024-11-19 09:34:42.181672] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:55.865 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:55.865 [2024-11-19 09:34:42.374531] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:01.153 Initializing NVMe Controllers 00:17:01.153 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:17:01.153 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:01.153 Initialization complete. Launching workers. 00:17:01.153 ======================================================== 00:17:01.153 Latency(us) 00:17:01.153 Device Information : IOPS MiB/s Average min max 00:17:01.153 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7996.41 4989.76 14515.35 00:17:01.153 ======================================================== 00:17:01.153 Total : 16025.60 62.60 7996.41 4989.76 14515.35 00:17:01.153 00:17:01.153 [2024-11-19 09:34:47.410410] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:01.153 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:01.153 [2024-11-19 09:34:47.613297] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:06.442 [2024-11-19 09:34:52.674338] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:06.442 Initializing NVMe Controllers 00:17:06.442 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:06.442 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:06.442 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:06.442 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:06.442 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:06.442 Initialization complete. Launching workers. 
00:17:06.442 Starting thread on core 2 00:17:06.442 Starting thread on core 3 00:17:06.442 Starting thread on core 1 00:17:06.442 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:06.442 [2024-11-19 09:34:52.920458] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:09.744 [2024-11-19 09:34:56.123311] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:09.744 Initializing NVMe Controllers 00:17:09.744 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:09.744 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:09.744 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:09.744 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:09.744 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:09.744 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:09.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:09.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:09.744 Initialization complete. Launching workers. 
00:17:09.744 Starting thread on core 1 with urgent priority queue 00:17:09.744 Starting thread on core 2 with urgent priority queue 00:17:09.744 Starting thread on core 3 with urgent priority queue 00:17:09.744 Starting thread on core 0 with urgent priority queue 00:17:09.744 SPDK bdev Controller (SPDK1 ) core 0: 9542.67 IO/s 10.48 secs/100000 ios 00:17:09.744 SPDK bdev Controller (SPDK1 ) core 1: 12137.00 IO/s 8.24 secs/100000 ios 00:17:09.744 SPDK bdev Controller (SPDK1 ) core 2: 8686.00 IO/s 11.51 secs/100000 ios 00:17:09.744 SPDK bdev Controller (SPDK1 ) core 3: 10269.67 IO/s 9.74 secs/100000 ios 00:17:09.744 ======================================================== 00:17:09.744 00:17:09.744 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:09.744 [2024-11-19 09:34:56.364641] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:09.744 Initializing NVMe Controllers 00:17:09.744 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:09.744 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:09.744 Namespace ID: 1 size: 0GB 00:17:09.744 Initialization complete. 00:17:09.744 INFO: using host memory buffer for IO 00:17:09.744 Hello world! 
00:17:09.744 [2024-11-19 09:34:56.398865] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:09.744 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:10.005 [2024-11-19 09:34:56.634514] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:10.945 Initializing NVMe Controllers 00:17:10.945 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:10.945 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:10.945 Initialization complete. Launching workers. 00:17:10.945 submit (in ns) avg, min, max = 5144.6, 2820.8, 3998289.2 00:17:10.945 complete (in ns) avg, min, max = 17970.4, 1648.3, 4027628.3 00:17:10.945 00:17:10.945 Submit histogram 00:17:10.945 ================ 00:17:10.945 Range in us Cumulative Count 00:17:10.945 2.813 - 2.827: 0.1734% ( 35) 00:17:10.945 2.827 - 2.840: 0.9262% ( 152) 00:17:10.945 2.840 - 2.853: 3.0362% ( 426) 00:17:10.945 2.853 - 2.867: 7.2115% ( 843) 00:17:10.945 2.867 - 2.880: 12.4170% ( 1051) 00:17:10.945 2.880 - 2.893: 18.7717% ( 1283) 00:17:10.946 2.893 - 2.907: 24.9926% ( 1256) 00:17:10.946 2.907 - 2.920: 29.8861% ( 988) 00:17:10.946 2.920 - 2.933: 36.1714% ( 1269) 00:17:10.946 2.933 - 2.947: 42.0258% ( 1182) 00:17:10.946 2.947 - 2.960: 47.0976% ( 1024) 00:17:10.946 2.960 - 2.973: 52.5012% ( 1091) 00:17:10.946 2.973 - 2.987: 60.4507% ( 1605) 00:17:10.946 2.987 - 3.000: 69.3512% ( 1797) 00:17:10.946 3.000 - 3.013: 78.2912% ( 1805) 00:17:10.946 3.013 - 3.027: 85.2749% ( 1410) 00:17:10.946 3.027 - 3.040: 91.4562% ( 1248) 00:17:10.946 3.040 - 3.053: 95.2650% ( 769) 00:17:10.946 3.053 - 3.067: 97.2214% ( 395) 00:17:10.946 3.067 - 3.080: 98.3606% ( 230) 00:17:10.946 3.080 - 3.093: 
98.9004% ( 109) 00:17:10.946 3.093 - 3.107: 99.2372% ( 68) 00:17:10.946 3.107 - 3.120: 99.3908% ( 31) 00:17:10.946 3.120 - 3.133: 99.4799% ( 18) 00:17:10.946 3.133 - 3.147: 99.5146% ( 7) 00:17:10.946 3.147 - 3.160: 99.5344% ( 4) 00:17:10.946 3.160 - 3.173: 99.5394% ( 1) 00:17:10.946 3.200 - 3.213: 99.5542% ( 3) 00:17:10.946 3.240 - 3.253: 99.5592% ( 1) 00:17:10.946 3.600 - 3.627: 99.5691% ( 2) 00:17:10.946 3.680 - 3.707: 99.5740% ( 1) 00:17:10.946 3.760 - 3.787: 99.5790% ( 1) 00:17:10.946 3.840 - 3.867: 99.5840% ( 1) 00:17:10.946 3.867 - 3.893: 99.5889% ( 1) 00:17:10.946 3.947 - 3.973: 99.5939% ( 1) 00:17:10.946 4.000 - 4.027: 99.5988% ( 1) 00:17:10.946 4.027 - 4.053: 99.6087% ( 2) 00:17:10.946 4.160 - 4.187: 99.6186% ( 2) 00:17:10.946 4.187 - 4.213: 99.6236% ( 1) 00:17:10.946 4.240 - 4.267: 99.6285% ( 1) 00:17:10.946 4.400 - 4.427: 99.6335% ( 1) 00:17:10.946 4.453 - 4.480: 99.6384% ( 1) 00:17:10.946 4.507 - 4.533: 99.6434% ( 1) 00:17:10.946 4.560 - 4.587: 99.6483% ( 1) 00:17:10.946 4.640 - 4.667: 99.6533% ( 1) 00:17:10.946 4.693 - 4.720: 99.6582% ( 1) 00:17:10.946 4.720 - 4.747: 99.6632% ( 1) 00:17:10.946 4.747 - 4.773: 99.6682% ( 1) 00:17:10.946 4.773 - 4.800: 99.6731% ( 1) 00:17:10.946 4.827 - 4.853: 99.6781% ( 1) 00:17:10.946 4.853 - 4.880: 99.6830% ( 1) 00:17:10.946 4.907 - 4.933: 99.6880% ( 1) 00:17:10.946 4.933 - 4.960: 99.6929% ( 1) 00:17:10.946 4.987 - 5.013: 99.7028% ( 2) 00:17:10.946 5.013 - 5.040: 99.7078% ( 1) 00:17:10.946 5.040 - 5.067: 99.7276% ( 4) 00:17:10.946 5.067 - 5.093: 99.7325% ( 1) 00:17:10.946 5.200 - 5.227: 99.7375% ( 1) 00:17:10.946 5.253 - 5.280: 99.7424% ( 1) 00:17:10.946 5.307 - 5.333: 99.7524% ( 2) 00:17:10.946 5.360 - 5.387: 99.7573% ( 1) 00:17:10.946 5.387 - 5.413: 99.7672% ( 2) 00:17:10.946 5.413 - 5.440: 99.7771% ( 2) 00:17:10.946 5.440 - 5.467: 99.7821% ( 1) 00:17:10.946 5.467 - 5.493: 99.7870% ( 1) 00:17:10.946 5.493 - 5.520: 99.7920% ( 1) 00:17:10.946 5.573 - 5.600: 99.8068% ( 3) 00:17:10.946 5.627 - 5.653: 99.8118% ( 1) 
00:17:10.946 5.653 - 5.680: 99.8167% ( 1) 00:17:10.946 5.680 - 5.707: 99.8217% ( 1) 00:17:10.946 5.733 - 5.760: 99.8266% ( 1) 00:17:10.946 5.787 - 5.813: 99.8316% ( 1) 00:17:10.946 5.840 - 5.867: 99.8415% ( 2) 00:17:10.946 5.867 - 5.893: 99.8465% ( 1) 00:17:10.946 5.893 - 5.920: 99.8514% ( 1) 00:17:10.946 5.920 - 5.947: 99.8564% ( 1) 00:17:10.946 5.947 - 5.973: 99.8613% ( 1) 00:17:10.946 6.027 - 6.053: 99.8663% ( 1) 00:17:10.946 6.080 - 6.107: 99.8712% ( 1) 00:17:10.946 [2024-11-19 09:34:57.656167] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:10.946 6.133 - 6.160: 99.8762% ( 1) 00:17:10.946 6.160 - 6.187: 99.8861% ( 2) 00:17:10.946 6.187 - 6.213: 99.8910% ( 1) 00:17:10.946 6.213 - 6.240: 99.8960% ( 1) 00:17:10.946 6.240 - 6.267: 99.9009% ( 1) 00:17:10.946 6.373 - 6.400: 99.9108% ( 2) 00:17:10.946 6.453 - 6.480: 99.9158% ( 1) 00:17:10.946 6.533 - 6.560: 99.9208% ( 1) 00:17:10.946 6.587 - 6.613: 99.9257% ( 1) 00:17:10.946 6.880 - 6.933: 99.9307% ( 1) 00:17:10.946 9.493 - 9.547: 99.9356% ( 1) 00:17:10.946 10.933 - 10.987: 99.9406% ( 1) 00:17:10.946 11.840 - 11.893: 99.9455% ( 1) 00:17:10.946 3986.773 - 4014.080: 100.0000% ( 11) 00:17:10.946 00:17:10.946 Complete histogram 00:17:10.946 ================== 00:17:10.946 Range in us Cumulative Count 00:17:10.946 1.647 - 1.653: 0.0545% ( 11) 00:17:10.946 1.653 - 1.660: 0.6241% ( 115) 00:17:10.946 1.660 - 1.667: 0.6835% ( 12) 00:17:10.946 1.667 - 1.673: 0.7429% ( 12) 00:17:10.946 1.673 - 1.680: 0.8519% ( 22) 00:17:10.946 1.680 - 1.687: 0.9113% ( 12) 00:17:10.946 1.687 - 1.693: 0.9262% ( 3) 00:17:10.946 1.693 - 1.700: 0.9312% ( 1) 00:17:10.946 1.700 - 1.707: 0.9361% ( 1) 00:17:10.946 1.707 - 1.720: 1.5057% ( 115) 00:17:10.946 1.720 - 1.733: 50.9113% ( 9975) 00:17:10.946 1.733 - 1.747: 69.5097% ( 3755) 00:17:10.946 1.747 - 1.760: 79.6533% ( 2048) 00:17:10.946 1.760 - 1.773: 82.6053% ( 596) 00:17:10.946 1.773 - 1.787: 85.1263% ( 509) 00:17:10.946 1.787 - 1.800: 
91.4364% ( 1274) 00:17:10.946 1.800 - 1.813: 95.9534% ( 912) 00:17:10.946 1.813 - 1.827: 98.2467% ( 463) 00:17:10.946 1.827 - 1.840: 99.1778% ( 188) 00:17:10.946 1.840 - 1.853: 99.4056% ( 46) 00:17:10.946 1.853 - 1.867: 99.4304% ( 5) 00:17:10.946 1.867 - 1.880: 99.4354% ( 1) 00:17:10.946 1.893 - 1.907: 99.4403% ( 1) 00:17:10.946 3.333 - 3.347: 99.4453% ( 1) 00:17:10.946 3.787 - 3.813: 99.4502% ( 1) 00:17:10.946 4.080 - 4.107: 99.4552% ( 1) 00:17:10.946 4.107 - 4.133: 99.4601% ( 1) 00:17:10.946 4.133 - 4.160: 99.4750% ( 3) 00:17:10.946 4.267 - 4.293: 99.4799% ( 1) 00:17:10.946 4.320 - 4.347: 99.4849% ( 1) 00:17:10.946 4.373 - 4.400: 99.4898% ( 1) 00:17:10.946 4.427 - 4.453: 99.4948% ( 1) 00:17:10.946 4.480 - 4.507: 99.4998% ( 1) 00:17:10.946 4.560 - 4.587: 99.5097% ( 2) 00:17:10.946 4.640 - 4.667: 99.5196% ( 2) 00:17:10.946 4.667 - 4.693: 99.5245% ( 1) 00:17:10.946 4.800 - 4.827: 99.5295% ( 1) 00:17:10.946 4.880 - 4.907: 99.5394% ( 2) 00:17:10.946 4.907 - 4.933: 99.5542% ( 3) 00:17:10.946 4.933 - 4.960: 99.5592% ( 1) 00:17:10.946 4.960 - 4.987: 99.5641% ( 1) 00:17:10.946 5.147 - 5.173: 99.5691% ( 1) 00:17:10.946 5.280 - 5.307: 99.5740% ( 1) 00:17:10.946 5.360 - 5.387: 99.5790% ( 1) 00:17:10.946 5.493 - 5.520: 99.5840% ( 1) 00:17:10.946 9.760 - 9.813: 99.5889% ( 1) 00:17:10.946 10.987 - 11.040: 99.5939% ( 1) 00:17:10.946 3904.853 - 3932.160: 99.5988% ( 1) 00:17:10.946 3986.773 - 4014.080: 99.9950% ( 80) 00:17:10.946 4014.080 - 4041.387: 100.0000% ( 1) 00:17:10.946 00:17:11.207 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:11.207 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:11.207 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:11.207 09:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:11.207 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:11.207 [ 00:17:11.207 { 00:17:11.207 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:11.207 "subtype": "Discovery", 00:17:11.207 "listen_addresses": [], 00:17:11.207 "allow_any_host": true, 00:17:11.207 "hosts": [] 00:17:11.207 }, 00:17:11.207 { 00:17:11.207 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:11.207 "subtype": "NVMe", 00:17:11.207 "listen_addresses": [ 00:17:11.207 { 00:17:11.207 "trtype": "VFIOUSER", 00:17:11.207 "adrfam": "IPv4", 00:17:11.207 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:11.207 "trsvcid": "0" 00:17:11.207 } 00:17:11.207 ], 00:17:11.207 "allow_any_host": true, 00:17:11.207 "hosts": [], 00:17:11.207 "serial_number": "SPDK1", 00:17:11.208 "model_number": "SPDK bdev Controller", 00:17:11.208 "max_namespaces": 32, 00:17:11.208 "min_cntlid": 1, 00:17:11.208 "max_cntlid": 65519, 00:17:11.208 "namespaces": [ 00:17:11.208 { 00:17:11.208 "nsid": 1, 00:17:11.208 "bdev_name": "Malloc1", 00:17:11.208 "name": "Malloc1", 00:17:11.208 "nguid": "7341D8E4FD2341FE8D5105B62FBBE951", 00:17:11.208 "uuid": "7341d8e4-fd23-41fe-8d51-05b62fbbe951" 00:17:11.208 } 00:17:11.208 ] 00:17:11.208 }, 00:17:11.208 { 00:17:11.208 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:11.208 "subtype": "NVMe", 00:17:11.208 "listen_addresses": [ 00:17:11.208 { 00:17:11.208 "trtype": "VFIOUSER", 00:17:11.208 "adrfam": "IPv4", 00:17:11.208 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:11.208 "trsvcid": "0" 00:17:11.208 } 00:17:11.208 ], 00:17:11.208 "allow_any_host": true, 00:17:11.208 "hosts": [], 00:17:11.208 "serial_number": "SPDK2", 00:17:11.208 "model_number": "SPDK bdev Controller", 00:17:11.208 "max_namespaces": 32, 00:17:11.208 "min_cntlid": 1, 00:17:11.208 
"max_cntlid": 65519, 00:17:11.208 "namespaces": [ 00:17:11.208 { 00:17:11.208 "nsid": 1, 00:17:11.208 "bdev_name": "Malloc2", 00:17:11.208 "name": "Malloc2", 00:17:11.208 "nguid": "574122EDB2F04733803FF992A4E140A5", 00:17:11.208 "uuid": "574122ed-b2f0-4733-803f-f992a4e140a5" 00:17:11.208 } 00:17:11.208 ] 00:17:11.208 } 00:17:11.208 ] 00:17:11.208 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:11.208 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:11.208 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=290625 00:17:11.208 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:11.208 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:11.208 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:11.208 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:11.208 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:17:11.208 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:11.469 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:11.469 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:11.469 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:17:11.469 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:11.469 [2024-11-19 09:34:58.025314] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:11.469 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:11.469 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:11.469 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:11.469 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:11.469 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:11.730 Malloc3 00:17:11.731 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:11.731 [2024-11-19 09:34:58.453347] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:11.992 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:11.992 Asynchronous Event Request test 00:17:11.992 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:11.992 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:11.992 
Registering asynchronous event callbacks... 00:17:11.992 Starting namespace attribute notice tests for all controllers... 00:17:11.992 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:11.992 aer_cb - Changed Namespace 00:17:11.992 Cleaning up... 00:17:11.992 [ 00:17:11.992 { 00:17:11.992 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:11.992 "subtype": "Discovery", 00:17:11.992 "listen_addresses": [], 00:17:11.992 "allow_any_host": true, 00:17:11.992 "hosts": [] 00:17:11.992 }, 00:17:11.992 { 00:17:11.992 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:11.992 "subtype": "NVMe", 00:17:11.992 "listen_addresses": [ 00:17:11.992 { 00:17:11.992 "trtype": "VFIOUSER", 00:17:11.992 "adrfam": "IPv4", 00:17:11.992 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:11.992 "trsvcid": "0" 00:17:11.992 } 00:17:11.992 ], 00:17:11.992 "allow_any_host": true, 00:17:11.992 "hosts": [], 00:17:11.992 "serial_number": "SPDK1", 00:17:11.992 "model_number": "SPDK bdev Controller", 00:17:11.992 "max_namespaces": 32, 00:17:11.992 "min_cntlid": 1, 00:17:11.992 "max_cntlid": 65519, 00:17:11.992 "namespaces": [ 00:17:11.992 { 00:17:11.992 "nsid": 1, 00:17:11.992 "bdev_name": "Malloc1", 00:17:11.992 "name": "Malloc1", 00:17:11.992 "nguid": "7341D8E4FD2341FE8D5105B62FBBE951", 00:17:11.992 "uuid": "7341d8e4-fd23-41fe-8d51-05b62fbbe951" 00:17:11.992 }, 00:17:11.992 { 00:17:11.992 "nsid": 2, 00:17:11.992 "bdev_name": "Malloc3", 00:17:11.992 "name": "Malloc3", 00:17:11.992 "nguid": "A23F141393D74282A58B2F58C85745C1", 00:17:11.992 "uuid": "a23f1413-93d7-4282-a58b-2f58c85745c1" 00:17:11.992 } 00:17:11.992 ] 00:17:11.992 }, 00:17:11.992 { 00:17:11.992 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:11.992 "subtype": "NVMe", 00:17:11.992 "listen_addresses": [ 00:17:11.992 { 00:17:11.992 "trtype": "VFIOUSER", 00:17:11.992 "adrfam": "IPv4", 00:17:11.992 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:11.992 "trsvcid": "0" 
00:17:11.992 } 00:17:11.992 ], 00:17:11.992 "allow_any_host": true, 00:17:11.992 "hosts": [], 00:17:11.992 "serial_number": "SPDK2", 00:17:11.992 "model_number": "SPDK bdev Controller", 00:17:11.992 "max_namespaces": 32, 00:17:11.992 "min_cntlid": 1, 00:17:11.992 "max_cntlid": 65519, 00:17:11.992 "namespaces": [ 00:17:11.992 { 00:17:11.992 "nsid": 1, 00:17:11.992 "bdev_name": "Malloc2", 00:17:11.992 "name": "Malloc2", 00:17:11.992 "nguid": "574122EDB2F04733803FF992A4E140A5", 00:17:11.992 "uuid": "574122ed-b2f0-4733-803f-f992a4e140a5" 00:17:11.992 } 00:17:11.992 ] 00:17:11.992 } 00:17:11.992 ] 00:17:11.992 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 290625 00:17:11.992 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:11.992 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:11.992 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:11.992 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:11.992 [2024-11-19 09:34:58.683082] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:11.992 [2024-11-19 09:34:58.683126] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290651 ] 00:17:11.992 [2024-11-19 09:34:58.722409] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:11.992 [2024-11-19 09:34:58.731354] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:11.992 [2024-11-19 09:34:58.731372] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7c0c8ae000 00:17:11.992 [2024-11-19 09:34:58.732352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:11.992 [2024-11-19 09:34:58.733361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:11.992 [2024-11-19 09:34:58.734371] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:11.992 [2024-11-19 09:34:58.735376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:12.256 [2024-11-19 09:34:58.736384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:12.256 [2024-11-19 09:34:58.737394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:12.256 [2024-11-19 09:34:58.738405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:12.256 
[2024-11-19 09:34:58.739409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:12.256 [2024-11-19 09:34:58.740420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:12.256 [2024-11-19 09:34:58.740428] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7c0c8a3000 00:17:12.256 [2024-11-19 09:34:58.741340] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:12.256 [2024-11-19 09:34:58.750715] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:12.256 [2024-11-19 09:34:58.750732] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:12.256 [2024-11-19 09:34:58.755794] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:12.256 [2024-11-19 09:34:58.755827] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:12.256 [2024-11-19 09:34:58.755884] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:12.256 [2024-11-19 09:34:58.755894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:12.256 [2024-11-19 09:34:58.755897] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:12.256 [2024-11-19 09:34:58.756796] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:12.256 [2024-11-19 09:34:58.756804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:12.256 [2024-11-19 09:34:58.756809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:12.256 [2024-11-19 09:34:58.757798] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:12.256 [2024-11-19 09:34:58.757804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:12.256 [2024-11-19 09:34:58.757809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:12.256 [2024-11-19 09:34:58.758809] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:12.256 [2024-11-19 09:34:58.758816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:12.256 [2024-11-19 09:34:58.759816] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:12.256 [2024-11-19 09:34:58.759823] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:12.256 [2024-11-19 09:34:58.759826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:12.256 [2024-11-19 09:34:58.759831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:12.256 [2024-11-19 09:34:58.759937] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:12.256 [2024-11-19 09:34:58.759941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:12.256 [2024-11-19 09:34:58.759944] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:12.256 [2024-11-19 09:34:58.760827] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:12.256 [2024-11-19 09:34:58.761830] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:12.256 [2024-11-19 09:34:58.762835] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:12.256 [2024-11-19 09:34:58.763840] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:12.256 [2024-11-19 09:34:58.763872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:12.256 [2024-11-19 09:34:58.764849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:12.256 [2024-11-19 09:34:58.764856] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:12.256 [2024-11-19 09:34:58.764859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:12.256 [2024-11-19 09:34:58.764874] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:12.256 [2024-11-19 09:34:58.764882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:12.256 [2024-11-19 09:34:58.764891] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:12.256 [2024-11-19 09:34:58.764895] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:12.256 [2024-11-19 09:34:58.764897] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:12.256 [2024-11-19 09:34:58.764906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:12.256 [2024-11-19 09:34:58.772163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:12.256 [2024-11-19 09:34:58.772172] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:12.256 [2024-11-19 09:34:58.772179] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:12.256 [2024-11-19 09:34:58.772182] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:12.256 [2024-11-19 09:34:58.772185] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:12.256 [2024-11-19 09:34:58.772190] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:12.256 [2024-11-19 09:34:58.772194] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:12.256 [2024-11-19 09:34:58.772197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:12.256 [2024-11-19 09:34:58.772204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:12.256 [2024-11-19 09:34:58.772211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:12.256 [2024-11-19 09:34:58.780163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:12.256 [2024-11-19 09:34:58.780173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.256 [2024-11-19 09:34:58.780179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.256 [2024-11-19 09:34:58.780185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.257 [2024-11-19 09:34:58.780191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.257 [2024-11-19 09:34:58.780194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.780199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.780206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.788163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.788170] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:12.257 [2024-11-19 09:34:58.788174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.788179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.788183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.788190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.796163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.796210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.796219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:12.257 
[2024-11-19 09:34:58.796224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:12.257 [2024-11-19 09:34:58.796227] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:12.257 [2024-11-19 09:34:58.796230] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:12.257 [2024-11-19 09:34:58.796235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.804163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.804171] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:12.257 [2024-11-19 09:34:58.804178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.804183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.804188] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:12.257 [2024-11-19 09:34:58.804191] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:12.257 [2024-11-19 09:34:58.804194] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:12.257 [2024-11-19 09:34:58.804198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.812162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.812173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.812179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.812185] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:12.257 [2024-11-19 09:34:58.812188] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:12.257 [2024-11-19 09:34:58.812190] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:12.257 [2024-11-19 09:34:58.812194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.820164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.820171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.820176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.820182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.820186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.820189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.820194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.820198] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:12.257 [2024-11-19 09:34:58.820201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:12.257 [2024-11-19 09:34:58.820205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:12.257 [2024-11-19 09:34:58.820218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.828162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.828172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.836163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.836173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.844164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 
09:34:58.844174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.852163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.852175] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:12.257 [2024-11-19 09:34:58.852179] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:12.257 [2024-11-19 09:34:58.852181] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:12.257 [2024-11-19 09:34:58.852184] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:12.257 [2024-11-19 09:34:58.852186] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:12.257 [2024-11-19 09:34:58.852191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:12.257 [2024-11-19 09:34:58.852196] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:12.257 [2024-11-19 09:34:58.852200] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:12.257 [2024-11-19 09:34:58.852202] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:12.257 [2024-11-19 09:34:58.852206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.852211] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:12.257 [2024-11-19 09:34:58.852214] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:12.257 [2024-11-19 09:34:58.852217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:12.257 [2024-11-19 09:34:58.852221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.852227] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:12.257 [2024-11-19 09:34:58.852230] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:12.257 [2024-11-19 09:34:58.852234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:12.257 [2024-11-19 09:34:58.852238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:12.257 [2024-11-19 09:34:58.860164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.860175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.860183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:12.257 [2024-11-19 09:34:58.860188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:12.257 ===================================================== 00:17:12.257 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:12.257 ===================================================== 00:17:12.257 Controller Capabilities/Features 00:17:12.257 
================================ 00:17:12.257 Vendor ID: 4e58 00:17:12.257 Subsystem Vendor ID: 4e58 00:17:12.257 Serial Number: SPDK2 00:17:12.257 Model Number: SPDK bdev Controller 00:17:12.257 Firmware Version: 25.01 00:17:12.257 Recommended Arb Burst: 6 00:17:12.257 IEEE OUI Identifier: 8d 6b 50 00:17:12.257 Multi-path I/O 00:17:12.257 May have multiple subsystem ports: Yes 00:17:12.257 May have multiple controllers: Yes 00:17:12.258 Associated with SR-IOV VF: No 00:17:12.258 Max Data Transfer Size: 131072 00:17:12.258 Max Number of Namespaces: 32 00:17:12.258 Max Number of I/O Queues: 127 00:17:12.258 NVMe Specification Version (VS): 1.3 00:17:12.258 NVMe Specification Version (Identify): 1.3 00:17:12.258 Maximum Queue Entries: 256 00:17:12.258 Contiguous Queues Required: Yes 00:17:12.258 Arbitration Mechanisms Supported 00:17:12.258 Weighted Round Robin: Not Supported 00:17:12.258 Vendor Specific: Not Supported 00:17:12.258 Reset Timeout: 15000 ms 00:17:12.258 Doorbell Stride: 4 bytes 00:17:12.258 NVM Subsystem Reset: Not Supported 00:17:12.258 Command Sets Supported 00:17:12.258 NVM Command Set: Supported 00:17:12.258 Boot Partition: Not Supported 00:17:12.258 Memory Page Size Minimum: 4096 bytes 00:17:12.258 Memory Page Size Maximum: 4096 bytes 00:17:12.258 Persistent Memory Region: Not Supported 00:17:12.258 Optional Asynchronous Events Supported 00:17:12.258 Namespace Attribute Notices: Supported 00:17:12.258 Firmware Activation Notices: Not Supported 00:17:12.258 ANA Change Notices: Not Supported 00:17:12.258 PLE Aggregate Log Change Notices: Not Supported 00:17:12.258 LBA Status Info Alert Notices: Not Supported 00:17:12.258 EGE Aggregate Log Change Notices: Not Supported 00:17:12.258 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.258 Zone Descriptor Change Notices: Not Supported 00:17:12.258 Discovery Log Change Notices: Not Supported 00:17:12.258 Controller Attributes 00:17:12.258 128-bit Host Identifier: Supported 00:17:12.258 
Non-Operational Permissive Mode: Not Supported 00:17:12.258 NVM Sets: Not Supported 00:17:12.258 Read Recovery Levels: Not Supported 00:17:12.258 Endurance Groups: Not Supported 00:17:12.258 Predictable Latency Mode: Not Supported 00:17:12.258 Traffic Based Keep ALive: Not Supported 00:17:12.258 Namespace Granularity: Not Supported 00:17:12.258 SQ Associations: Not Supported 00:17:12.258 UUID List: Not Supported 00:17:12.258 Multi-Domain Subsystem: Not Supported 00:17:12.258 Fixed Capacity Management: Not Supported 00:17:12.258 Variable Capacity Management: Not Supported 00:17:12.258 Delete Endurance Group: Not Supported 00:17:12.258 Delete NVM Set: Not Supported 00:17:12.258 Extended LBA Formats Supported: Not Supported 00:17:12.258 Flexible Data Placement Supported: Not Supported 00:17:12.258 00:17:12.258 Controller Memory Buffer Support 00:17:12.258 ================================ 00:17:12.258 Supported: No 00:17:12.258 00:17:12.258 Persistent Memory Region Support 00:17:12.258 ================================ 00:17:12.258 Supported: No 00:17:12.258 00:17:12.258 Admin Command Set Attributes 00:17:12.258 ============================ 00:17:12.258 Security Send/Receive: Not Supported 00:17:12.258 Format NVM: Not Supported 00:17:12.258 Firmware Activate/Download: Not Supported 00:17:12.258 Namespace Management: Not Supported 00:17:12.258 Device Self-Test: Not Supported 00:17:12.258 Directives: Not Supported 00:17:12.258 NVMe-MI: Not Supported 00:17:12.258 Virtualization Management: Not Supported 00:17:12.258 Doorbell Buffer Config: Not Supported 00:17:12.258 Get LBA Status Capability: Not Supported 00:17:12.258 Command & Feature Lockdown Capability: Not Supported 00:17:12.258 Abort Command Limit: 4 00:17:12.258 Async Event Request Limit: 4 00:17:12.258 Number of Firmware Slots: N/A 00:17:12.258 Firmware Slot 1 Read-Only: N/A 00:17:12.258 Firmware Activation Without Reset: N/A 00:17:12.258 Multiple Update Detection Support: N/A 00:17:12.258 Firmware Update 
Granularity: No Information Provided 00:17:12.258 Per-Namespace SMART Log: No 00:17:12.258 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.258 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:12.258 Command Effects Log Page: Supported 00:17:12.258 Get Log Page Extended Data: Supported 00:17:12.258 Telemetry Log Pages: Not Supported 00:17:12.258 Persistent Event Log Pages: Not Supported 00:17:12.258 Supported Log Pages Log Page: May Support 00:17:12.258 Commands Supported & Effects Log Page: Not Supported 00:17:12.258 Feature Identifiers & Effects Log Page:May Support 00:17:12.258 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.258 Data Area 4 for Telemetry Log: Not Supported 00:17:12.258 Error Log Page Entries Supported: 128 00:17:12.258 Keep Alive: Supported 00:17:12.258 Keep Alive Granularity: 10000 ms 00:17:12.258 00:17:12.258 NVM Command Set Attributes 00:17:12.258 ========================== 00:17:12.258 Submission Queue Entry Size 00:17:12.258 Max: 64 00:17:12.258 Min: 64 00:17:12.258 Completion Queue Entry Size 00:17:12.258 Max: 16 00:17:12.258 Min: 16 00:17:12.258 Number of Namespaces: 32 00:17:12.258 Compare Command: Supported 00:17:12.258 Write Uncorrectable Command: Not Supported 00:17:12.258 Dataset Management Command: Supported 00:17:12.258 Write Zeroes Command: Supported 00:17:12.258 Set Features Save Field: Not Supported 00:17:12.258 Reservations: Not Supported 00:17:12.258 Timestamp: Not Supported 00:17:12.258 Copy: Supported 00:17:12.258 Volatile Write Cache: Present 00:17:12.258 Atomic Write Unit (Normal): 1 00:17:12.258 Atomic Write Unit (PFail): 1 00:17:12.258 Atomic Compare & Write Unit: 1 00:17:12.258 Fused Compare & Write: Supported 00:17:12.258 Scatter-Gather List 00:17:12.258 SGL Command Set: Supported (Dword aligned) 00:17:12.258 SGL Keyed: Not Supported 00:17:12.258 SGL Bit Bucket Descriptor: Not Supported 00:17:12.258 SGL Metadata Pointer: Not Supported 00:17:12.258 Oversized SGL: Not Supported 00:17:12.258 SGL 
Metadata Address: Not Supported 00:17:12.258 SGL Offset: Not Supported 00:17:12.258 Transport SGL Data Block: Not Supported 00:17:12.258 Replay Protected Memory Block: Not Supported 00:17:12.258 00:17:12.258 Firmware Slot Information 00:17:12.258 ========================= 00:17:12.258 Active slot: 1 00:17:12.258 Slot 1 Firmware Revision: 25.01 00:17:12.258 00:17:12.258 00:17:12.258 Commands Supported and Effects 00:17:12.258 ============================== 00:17:12.258 Admin Commands 00:17:12.258 -------------- 00:17:12.258 Get Log Page (02h): Supported 00:17:12.258 Identify (06h): Supported 00:17:12.258 Abort (08h): Supported 00:17:12.258 Set Features (09h): Supported 00:17:12.258 Get Features (0Ah): Supported 00:17:12.258 Asynchronous Event Request (0Ch): Supported 00:17:12.258 Keep Alive (18h): Supported 00:17:12.258 I/O Commands 00:17:12.258 ------------ 00:17:12.258 Flush (00h): Supported LBA-Change 00:17:12.258 Write (01h): Supported LBA-Change 00:17:12.258 Read (02h): Supported 00:17:12.258 Compare (05h): Supported 00:17:12.258 Write Zeroes (08h): Supported LBA-Change 00:17:12.258 Dataset Management (09h): Supported LBA-Change 00:17:12.258 Copy (19h): Supported LBA-Change 00:17:12.258 00:17:12.258 Error Log 00:17:12.258 ========= 00:17:12.258 00:17:12.258 Arbitration 00:17:12.258 =========== 00:17:12.258 Arbitration Burst: 1 00:17:12.258 00:17:12.258 Power Management 00:17:12.258 ================ 00:17:12.258 Number of Power States: 1 00:17:12.258 Current Power State: Power State #0 00:17:12.258 Power State #0: 00:17:12.258 Max Power: 0.00 W 00:17:12.258 Non-Operational State: Operational 00:17:12.258 Entry Latency: Not Reported 00:17:12.258 Exit Latency: Not Reported 00:17:12.258 Relative Read Throughput: 0 00:17:12.258 Relative Read Latency: 0 00:17:12.258 Relative Write Throughput: 0 00:17:12.258 Relative Write Latency: 0 00:17:12.258 Idle Power: Not Reported 00:17:12.258 Active Power: Not Reported 00:17:12.258 Non-Operational Permissive Mode: Not 
Supported 00:17:12.258 00:17:12.258 Health Information 00:17:12.258 ================== 00:17:12.258 Critical Warnings: 00:17:12.258 Available Spare Space: OK 00:17:12.258 Temperature: OK 00:17:12.258 Device Reliability: OK 00:17:12.258 Read Only: No 00:17:12.258 Volatile Memory Backup: OK 00:17:12.258 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:12.258 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:12.258 Available Spare: 0% 00:17:12.258 Available Sp[2024-11-19 09:34:58.860262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:12.258 [2024-11-19 09:34:58.868163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:12.258 [2024-11-19 09:34:58.868188] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:12.259 [2024-11-19 09:34:58.868194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.259 [2024-11-19 09:34:58.868199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.259 [2024-11-19 09:34:58.868204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.259 [2024-11-19 09:34:58.868208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.259 [2024-11-19 09:34:58.868246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:12.259 [2024-11-19 09:34:58.868253] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:12.259 
[2024-11-19 09:34:58.869255] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:12.259 [2024-11-19 09:34:58.869294] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:12.259 [2024-11-19 09:34:58.869298] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:12.259 [2024-11-19 09:34:58.870260] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:12.259 [2024-11-19 09:34:58.870269] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:12.259 [2024-11-19 09:34:58.870314] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:12.259 [2024-11-19 09:34:58.871275] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:12.259 are Threshold: 0% 00:17:12.259 Life Percentage Used: 0% 00:17:12.259 Data Units Read: 0 00:17:12.259 Data Units Written: 0 00:17:12.259 Host Read Commands: 0 00:17:12.259 Host Write Commands: 0 00:17:12.259 Controller Busy Time: 0 minutes 00:17:12.259 Power Cycles: 0 00:17:12.259 Power On Hours: 0 hours 00:17:12.259 Unsafe Shutdowns: 0 00:17:12.259 Unrecoverable Media Errors: 0 00:17:12.259 Lifetime Error Log Entries: 0 00:17:12.259 Warning Temperature Time: 0 minutes 00:17:12.259 Critical Temperature Time: 0 minutes 00:17:12.259 00:17:12.259 Number of Queues 00:17:12.259 ================ 00:17:12.259 Number of I/O Submission Queues: 127 00:17:12.259 Number of I/O Completion Queues: 127 00:17:12.259 00:17:12.259 Active Namespaces 00:17:12.259 ================= 00:17:12.259 Namespace ID:1 00:17:12.259 Error Recovery Timeout: Unlimited 
00:17:12.259 Command Set Identifier: NVM (00h) 00:17:12.259 Deallocate: Supported 00:17:12.259 Deallocated/Unwritten Error: Not Supported 00:17:12.259 Deallocated Read Value: Unknown 00:17:12.259 Deallocate in Write Zeroes: Not Supported 00:17:12.259 Deallocated Guard Field: 0xFFFF 00:17:12.259 Flush: Supported 00:17:12.259 Reservation: Supported 00:17:12.259 Namespace Sharing Capabilities: Multiple Controllers 00:17:12.259 Size (in LBAs): 131072 (0GiB) 00:17:12.259 Capacity (in LBAs): 131072 (0GiB) 00:17:12.259 Utilization (in LBAs): 131072 (0GiB) 00:17:12.259 NGUID: 574122EDB2F04733803FF992A4E140A5 00:17:12.259 UUID: 574122ed-b2f0-4733-803f-f992a4e140a5 00:17:12.259 Thin Provisioning: Not Supported 00:17:12.259 Per-NS Atomic Units: Yes 00:17:12.259 Atomic Boundary Size (Normal): 0 00:17:12.259 Atomic Boundary Size (PFail): 0 00:17:12.259 Atomic Boundary Offset: 0 00:17:12.259 Maximum Single Source Range Length: 65535 00:17:12.259 Maximum Copy Length: 65535 00:17:12.259 Maximum Source Range Count: 1 00:17:12.259 NGUID/EUI64 Never Reused: No 00:17:12.259 Namespace Write Protected: No 00:17:12.259 Number of LBA Formats: 1 00:17:12.259 Current LBA Format: LBA Format #00 00:17:12.259 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.259 00:17:12.259 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:12.520 [2024-11-19 09:34:59.067231] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:17.807 Initializing NVMe Controllers 00:17:17.807 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:17.808 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:17:17.808 Initialization complete. Launching workers. 00:17:17.808 ======================================================== 00:17:17.808 Latency(us) 00:17:17.808 Device Information : IOPS MiB/s Average min max 00:17:17.808 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40086.21 156.59 3192.95 843.12 6958.88 00:17:17.808 ======================================================== 00:17:17.808 Total : 40086.21 156.59 3192.95 843.12 6958.88 00:17:17.808 00:17:17.808 [2024-11-19 09:35:04.175361] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:17.808 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:17.808 [2024-11-19 09:35:04.363962] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:23.103 Initializing NVMe Controllers 00:17:23.103 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:23.103 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:23.103 Initialization complete. Launching workers. 
00:17:23.103 ======================================================== 00:17:23.103 Latency(us) 00:17:23.103 Device Information : IOPS MiB/s Average min max 00:17:23.103 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39983.84 156.19 3201.16 845.57 7779.89 00:17:23.103 ======================================================== 00:17:23.103 Total : 39983.84 156.19 3201.16 845.57 7779.89 00:17:23.103 00:17:23.103 [2024-11-19 09:35:09.381518] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:23.103 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:23.103 [2024-11-19 09:35:09.583720] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:28.387 [2024-11-19 09:35:14.734244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:28.387 Initializing NVMe Controllers 00:17:28.387 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:28.387 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:28.387 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:28.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:28.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:28.388 Initialization complete. Launching workers. 
00:17:28.388 Starting thread on core 2 00:17:28.388 Starting thread on core 3 00:17:28.388 Starting thread on core 1 00:17:28.388 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:28.388 [2024-11-19 09:35:14.984519] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.688 [2024-11-19 09:35:18.144285] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:31.688 Initializing NVMe Controllers 00:17:31.688 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.688 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.688 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:31.688 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:31.688 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:31.688 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:31.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:31.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:31.688 Initialization complete. Launching workers. 
00:17:31.688 Starting thread on core 1 with urgent priority queue 00:17:31.688 Starting thread on core 2 with urgent priority queue 00:17:31.688 Starting thread on core 3 with urgent priority queue 00:17:31.688 Starting thread on core 0 with urgent priority queue 00:17:31.689 SPDK bdev Controller (SPDK2 ) core 0: 8362.00 IO/s 11.96 secs/100000 ios 00:17:31.689 SPDK bdev Controller (SPDK2 ) core 1: 5542.67 IO/s 18.04 secs/100000 ios 00:17:31.689 SPDK bdev Controller (SPDK2 ) core 2: 9246.33 IO/s 10.82 secs/100000 ios 00:17:31.689 SPDK bdev Controller (SPDK2 ) core 3: 7759.00 IO/s 12.89 secs/100000 ios 00:17:31.689 ======================================================== 00:17:31.689 00:17:31.689 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:31.689 [2024-11-19 09:35:18.378092] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.689 Initializing NVMe Controllers 00:17:31.689 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.689 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:31.689 Namespace ID: 1 size: 0GB 00:17:31.689 Initialization complete. 00:17:31.689 INFO: using host memory buffer for IO 00:17:31.689 Hello world! 
00:17:31.689 [2024-11-19 09:35:18.389191] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:31.689 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:31.950 [2024-11-19 09:35:18.625538] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:33.335 Initializing NVMe Controllers 00:17:33.335 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:33.335 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:33.335 Initialization complete. Launching workers. 00:17:33.335 submit (in ns) avg, min, max = 6503.6, 2837.5, 3998435.8 00:17:33.335 complete (in ns) avg, min, max = 15792.0, 1642.5, 4089915.8 00:17:33.335 00:17:33.335 Submit histogram 00:17:33.335 ================ 00:17:33.335 Range in us Cumulative Count 00:17:33.335 2.827 - 2.840: 0.0440% ( 9) 00:17:33.335 2.840 - 2.853: 0.8466% ( 164) 00:17:33.335 2.853 - 2.867: 3.4891% ( 540) 00:17:33.335 2.867 - 2.880: 7.4823% ( 816) 00:17:33.335 2.880 - 2.893: 12.4003% ( 1005) 00:17:33.335 2.893 - 2.907: 17.9398% ( 1132) 00:17:33.335 2.907 - 2.920: 22.9606% ( 1026) 00:17:33.335 2.920 - 2.933: 29.4837% ( 1333) 00:17:33.335 2.933 - 2.947: 35.0722% ( 1142) 00:17:33.335 2.947 - 2.960: 39.9609% ( 999) 00:17:33.335 2.960 - 2.973: 44.5021% ( 928) 00:17:33.335 2.973 - 2.987: 49.3565% ( 992) 00:17:33.335 2.987 - 3.000: 56.5794% ( 1476) 00:17:33.335 3.000 - 3.013: 66.2882% ( 1984) 00:17:33.335 3.013 - 3.027: 75.0379% ( 1788) 00:17:33.335 3.027 - 3.040: 81.8840% ( 1399) 00:17:33.335 3.040 - 3.053: 88.7301% ( 1399) 00:17:33.335 3.053 - 3.067: 93.8292% ( 1042) 00:17:33.335 3.067 - 3.080: 97.4456% ( 739) 00:17:33.335 3.080 - 3.093: 98.7864% ( 274) 00:17:33.335 3.093 - 3.107: 
99.2317% ( 91) 00:17:33.335 3.107 - 3.120: 99.4421% ( 43) 00:17:33.335 3.120 - 3.133: 99.5253% ( 17) 00:17:33.335 3.133 - 3.147: 99.5449% ( 4) 00:17:33.335 3.147 - 3.160: 99.5498% ( 1) 00:17:33.335 3.333 - 3.347: 99.5547% ( 1) 00:17:33.335 3.413 - 3.440: 99.5596% ( 1) 00:17:33.335 3.440 - 3.467: 99.5645% ( 1) 00:17:33.335 3.520 - 3.547: 99.5743% ( 2) 00:17:33.335 3.573 - 3.600: 99.5792% ( 1) 00:17:33.335 4.133 - 4.160: 99.5840% ( 1) 00:17:33.335 4.293 - 4.320: 99.5938% ( 2) 00:17:33.335 4.453 - 4.480: 99.5987% ( 1) 00:17:33.335 4.667 - 4.693: 99.6134% ( 3) 00:17:33.335 4.720 - 4.747: 99.6183% ( 1) 00:17:33.335 4.853 - 4.880: 99.6281% ( 2) 00:17:33.335 4.880 - 4.907: 99.6379% ( 2) 00:17:33.335 4.907 - 4.933: 99.6477% ( 2) 00:17:33.335 4.933 - 4.960: 99.6672% ( 4) 00:17:33.335 5.067 - 5.093: 99.6721% ( 1) 00:17:33.335 5.093 - 5.120: 99.6819% ( 2) 00:17:33.335 5.120 - 5.147: 99.6966% ( 3) 00:17:33.335 5.147 - 5.173: 99.7162% ( 4) 00:17:33.335 5.173 - 5.200: 99.7309% ( 3) 00:17:33.335 5.253 - 5.280: 99.7406% ( 2) 00:17:33.335 5.280 - 5.307: 99.7553% ( 3) 00:17:33.335 5.440 - 5.467: 99.7651% ( 2) 00:17:33.335 5.520 - 5.547: 99.7700% ( 1) 00:17:33.335 5.627 - 5.653: 99.7798% ( 2) 00:17:33.335 5.653 - 5.680: 99.7847% ( 1) 00:17:33.335 5.680 - 5.707: 99.7896% ( 1) 00:17:33.335 5.733 - 5.760: 99.7994% ( 2) 00:17:33.335 5.787 - 5.813: 99.8092% ( 2) 00:17:33.335 5.813 - 5.840: 99.8140% ( 1) 00:17:33.335 5.973 - 6.000: 99.8189% ( 1) 00:17:33.335 6.053 - 6.080: 99.8238% ( 1) 00:17:33.335 6.107 - 6.133: 99.8287% ( 1) 00:17:33.335 6.133 - 6.160: 99.8385% ( 2) 00:17:33.335 6.160 - 6.187: 99.8483% ( 2) 00:17:33.335 6.320 - 6.347: 99.8532% ( 1) 00:17:33.335 6.347 - 6.373: 99.8581% ( 1) 00:17:33.335 6.507 - 6.533: 99.8679% ( 2) 00:17:33.335 6.533 - 6.560: 99.8728% ( 1) 00:17:33.335 6.640 - 6.667: 99.8777% ( 1) 00:17:33.335 6.693 - 6.720: 99.8826% ( 1) 00:17:33.335 6.773 - 6.800: 99.8923% ( 2) 00:17:33.335 6.827 - 6.880: 99.8972% ( 1) 00:17:33.335 8.053 - 8.107: 99.9021% ( 1) 
00:17:33.335 9.227 - 9.280: 99.9070% ( 1) 00:17:33.335 12.160 - 12.213: 99.9119% ( 1) 00:17:33.335 3986.773 - 4014.080: 100.0000% ( 18) 00:17:33.335 00:17:33.335 Complete histogram 00:17:33.335 ================== 00:17:33.335 Range in us Cumulative Count 00:17:33.335 1.640 - 1.647: 0.2398% ( 49) 00:17:33.336 1.647 - 1.653: 0.7389% ( 102) 00:17:33.336 1.653 - 1.660: 0.7536% ( 3) 00:17:33.336 1.660 - 1.667: 0.8613% ( 22) 00:17:33.336 1.667 - 1.673: 0.9298% ( 14) 00:17:33.336 1.673 - 1.680: 0.9836% ( 11) 00:17:33.336 1.680 - [2024-11-19 09:35:19.719675] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:33.336 1.687: 1.0325% ( 10) 00:17:33.336 1.687 - 1.693: 42.4859% ( 8471) 00:17:33.336 1.693 - 1.700: 55.8943% ( 2740) 00:17:33.336 1.700 - 1.707: 63.7876% ( 1613) 00:17:33.336 1.707 - 1.720: 77.2156% ( 2744) 00:17:33.336 1.720 - 1.733: 82.6327% ( 1107) 00:17:33.336 1.733 - 1.747: 84.1546% ( 311) 00:17:33.336 1.747 - 1.760: 88.3484% ( 857) 00:17:33.336 1.760 - 1.773: 94.4018% ( 1237) 00:17:33.336 1.773 - 1.787: 97.6070% ( 655) 00:17:33.336 1.787 - 1.800: 98.9675% ( 278) 00:17:33.336 1.800 - 1.813: 99.4421% ( 97) 00:17:33.336 1.813 - 1.827: 99.4960% ( 11) 00:17:33.336 1.827 - 1.840: 99.5009% ( 1) 00:17:33.336 3.493 - 3.520: 99.5057% ( 1) 00:17:33.336 3.600 - 3.627: 99.5155% ( 2) 00:17:33.336 3.733 - 3.760: 99.5253% ( 2) 00:17:33.336 3.840 - 3.867: 99.5302% ( 1) 00:17:33.336 3.867 - 3.893: 99.5351% ( 1) 00:17:33.336 3.947 - 3.973: 99.5400% ( 1) 00:17:33.336 4.027 - 4.053: 99.5449% ( 1) 00:17:33.336 4.107 - 4.133: 99.5498% ( 1) 00:17:33.336 4.133 - 4.160: 99.5596% ( 2) 00:17:33.336 4.213 - 4.240: 99.5743% ( 3) 00:17:33.336 4.400 - 4.427: 99.5792% ( 1) 00:17:33.336 4.453 - 4.480: 99.5840% ( 1) 00:17:33.336 4.613 - 4.640: 99.5889% ( 1) 00:17:33.336 4.640 - 4.667: 99.5938% ( 1) 00:17:33.336 4.667 - 4.693: 99.5987% ( 1) 00:17:33.336 4.693 - 4.720: 99.6036% ( 1) 00:17:33.336 4.720 - 4.747: 99.6085% ( 1) 00:17:33.336 
4.907 - 4.933: 99.6134% ( 1) 00:17:33.336 4.987 - 5.013: 99.6183% ( 1) 00:17:33.336 5.013 - 5.040: 99.6232% ( 1) 00:17:33.336 5.147 - 5.173: 99.6281% ( 1) 00:17:33.336 5.227 - 5.253: 99.6330% ( 1) 00:17:33.336 5.280 - 5.307: 99.6379% ( 1) 00:17:33.336 8.640 - 8.693: 99.6428% ( 1) 00:17:33.336 30.080 - 30.293: 99.6477% ( 1) 00:17:33.336 3986.773 - 4014.080: 99.9951% ( 71) 00:17:33.336 4068.693 - 4096.000: 100.0000% ( 1) 00:17:33.336 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:33.336 [ 00:17:33.336 { 00:17:33.336 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:33.336 "subtype": "Discovery", 00:17:33.336 "listen_addresses": [], 00:17:33.336 "allow_any_host": true, 00:17:33.336 "hosts": [] 00:17:33.336 }, 00:17:33.336 { 00:17:33.336 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:33.336 "subtype": "NVMe", 00:17:33.336 "listen_addresses": [ 00:17:33.336 { 00:17:33.336 "trtype": "VFIOUSER", 00:17:33.336 "adrfam": "IPv4", 00:17:33.336 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:33.336 "trsvcid": "0" 00:17:33.336 } 00:17:33.336 ], 00:17:33.336 "allow_any_host": true, 00:17:33.336 "hosts": [], 00:17:33.336 "serial_number": "SPDK1", 00:17:33.336 "model_number": "SPDK bdev Controller", 00:17:33.336 "max_namespaces": 32, 
00:17:33.336 "min_cntlid": 1, 00:17:33.336 "max_cntlid": 65519, 00:17:33.336 "namespaces": [ 00:17:33.336 { 00:17:33.336 "nsid": 1, 00:17:33.336 "bdev_name": "Malloc1", 00:17:33.336 "name": "Malloc1", 00:17:33.336 "nguid": "7341D8E4FD2341FE8D5105B62FBBE951", 00:17:33.336 "uuid": "7341d8e4-fd23-41fe-8d51-05b62fbbe951" 00:17:33.336 }, 00:17:33.336 { 00:17:33.336 "nsid": 2, 00:17:33.336 "bdev_name": "Malloc3", 00:17:33.336 "name": "Malloc3", 00:17:33.336 "nguid": "A23F141393D74282A58B2F58C85745C1", 00:17:33.336 "uuid": "a23f1413-93d7-4282-a58b-2f58c85745c1" 00:17:33.336 } 00:17:33.336 ] 00:17:33.336 }, 00:17:33.336 { 00:17:33.336 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:33.336 "subtype": "NVMe", 00:17:33.336 "listen_addresses": [ 00:17:33.336 { 00:17:33.336 "trtype": "VFIOUSER", 00:17:33.336 "adrfam": "IPv4", 00:17:33.336 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:33.336 "trsvcid": "0" 00:17:33.336 } 00:17:33.336 ], 00:17:33.336 "allow_any_host": true, 00:17:33.336 "hosts": [], 00:17:33.336 "serial_number": "SPDK2", 00:17:33.336 "model_number": "SPDK bdev Controller", 00:17:33.336 "max_namespaces": 32, 00:17:33.336 "min_cntlid": 1, 00:17:33.336 "max_cntlid": 65519, 00:17:33.336 "namespaces": [ 00:17:33.336 { 00:17:33.336 "nsid": 1, 00:17:33.336 "bdev_name": "Malloc2", 00:17:33.336 "name": "Malloc2", 00:17:33.336 "nguid": "574122EDB2F04733803FF992A4E140A5", 00:17:33.336 "uuid": "574122ed-b2f0-4733-803f-f992a4e140a5" 00:17:33.336 } 00:17:33.336 ] 00:17:33.336 } 00:17:33.336 ] 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=294972 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # 
local i=0 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:17:33.336 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:33.336 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:33.336 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:33.336 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:17:33.336 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:33.597 [2024-11-19 09:35:20.104612] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:33.597 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:33.597 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:33.597 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:33.597 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:33.597 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:33.597 Malloc4 00:17:33.858 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:33.858 [2024-11-19 09:35:20.508354] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:33.858 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:33.858 Asynchronous Event Request test 00:17:33.858 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:33.858 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:33.858 Registering asynchronous event callbacks... 00:17:33.858 Starting namespace attribute notice tests for all controllers... 00:17:33.858 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:33.858 aer_cb - Changed Namespace 00:17:33.858 Cleaning up... 
00:17:34.119 [ 00:17:34.119 { 00:17:34.119 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:34.119 "subtype": "Discovery", 00:17:34.119 "listen_addresses": [], 00:17:34.119 "allow_any_host": true, 00:17:34.119 "hosts": [] 00:17:34.119 }, 00:17:34.119 { 00:17:34.119 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:34.119 "subtype": "NVMe", 00:17:34.119 "listen_addresses": [ 00:17:34.119 { 00:17:34.119 "trtype": "VFIOUSER", 00:17:34.119 "adrfam": "IPv4", 00:17:34.119 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:34.119 "trsvcid": "0" 00:17:34.119 } 00:17:34.119 ], 00:17:34.119 "allow_any_host": true, 00:17:34.119 "hosts": [], 00:17:34.119 "serial_number": "SPDK1", 00:17:34.119 "model_number": "SPDK bdev Controller", 00:17:34.119 "max_namespaces": 32, 00:17:34.119 "min_cntlid": 1, 00:17:34.119 "max_cntlid": 65519, 00:17:34.119 "namespaces": [ 00:17:34.119 { 00:17:34.119 "nsid": 1, 00:17:34.119 "bdev_name": "Malloc1", 00:17:34.119 "name": "Malloc1", 00:17:34.119 "nguid": "7341D8E4FD2341FE8D5105B62FBBE951", 00:17:34.119 "uuid": "7341d8e4-fd23-41fe-8d51-05b62fbbe951" 00:17:34.119 }, 00:17:34.119 { 00:17:34.119 "nsid": 2, 00:17:34.119 "bdev_name": "Malloc3", 00:17:34.119 "name": "Malloc3", 00:17:34.119 "nguid": "A23F141393D74282A58B2F58C85745C1", 00:17:34.120 "uuid": "a23f1413-93d7-4282-a58b-2f58c85745c1" 00:17:34.120 } 00:17:34.120 ] 00:17:34.120 }, 00:17:34.120 { 00:17:34.120 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:34.120 "subtype": "NVMe", 00:17:34.120 "listen_addresses": [ 00:17:34.120 { 00:17:34.120 "trtype": "VFIOUSER", 00:17:34.120 "adrfam": "IPv4", 00:17:34.120 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:34.120 "trsvcid": "0" 00:17:34.120 } 00:17:34.120 ], 00:17:34.120 "allow_any_host": true, 00:17:34.120 "hosts": [], 00:17:34.120 "serial_number": "SPDK2", 00:17:34.120 "model_number": "SPDK bdev Controller", 00:17:34.120 "max_namespaces": 32, 00:17:34.120 "min_cntlid": 1, 00:17:34.120 "max_cntlid": 65519, 00:17:34.120 "namespaces": [ 
00:17:34.120 { 00:17:34.120 "nsid": 1, 00:17:34.120 "bdev_name": "Malloc2", 00:17:34.120 "name": "Malloc2", 00:17:34.120 "nguid": "574122EDB2F04733803FF992A4E140A5", 00:17:34.120 "uuid": "574122ed-b2f0-4733-803f-f992a4e140a5" 00:17:34.120 }, 00:17:34.120 { 00:17:34.120 "nsid": 2, 00:17:34.120 "bdev_name": "Malloc4", 00:17:34.120 "name": "Malloc4", 00:17:34.120 "nguid": "83A92C07D28D4D448BEA3F9E7E3CD191", 00:17:34.120 "uuid": "83a92c07-d28d-4d44-8bea-3f9e7e3cd191" 00:17:34.120 } 00:17:34.120 ] 00:17:34.120 } 00:17:34.120 ] 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 294972 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 285844 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 285844 ']' 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 285844 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 285844 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 285844' 00:17:34.120 killing process with pid 285844 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 285844 00:17:34.120 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 285844 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=295011 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 295011' 00:17:34.382 Process pid: 295011 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 295011 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 295011 ']' 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.382 09:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.382 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:34.382 [2024-11-19 09:35:20.982614] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:34.382 [2024-11-19 09:35:20.983515] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:34.382 [2024-11-19 09:35:20.983560] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.382 [2024-11-19 09:35:21.068854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.382 [2024-11-19 09:35:21.098546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.382 [2024-11-19 09:35:21.098576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.382 [2024-11-19 09:35:21.098581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.382 [2024-11-19 09:35:21.098586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.382 [2024-11-19 09:35:21.098590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.382 [2024-11-19 09:35:21.099721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.382 [2024-11-19 09:35:21.099871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.382 [2024-11-19 09:35:21.100004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.382 [2024-11-19 09:35:21.100007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.644 [2024-11-19 09:35:21.150229] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:34.644 [2024-11-19 09:35:21.151151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:34.644 [2024-11-19 09:35:21.152006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:34.644 [2024-11-19 09:35:21.152475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:34.644 [2024-11-19 09:35:21.152500] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:17:35.216 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.216 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:35.216 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:36.158 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:36.419 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:36.419 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:36.419 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:36.419 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:36.420 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:36.681 Malloc1 00:17:36.681 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:36.942 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:36.942 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:37.202 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:37.202 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:37.202 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:37.464 Malloc2 00:17:37.464 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:37.464 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:37.726 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 295011 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 295011 ']' 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 295011 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.987 09:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295011 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295011' 00:17:37.987 killing process with pid 295011 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 295011 00:17:37.987 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 295011 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:38.248 00:17:38.248 real 0m51.665s 00:17:38.248 user 3m17.930s 00:17:38.248 sys 0m2.718s 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:38.248 ************************************ 00:17:38.248 END TEST nvmf_vfio_user 00:17:38.248 ************************************ 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.248 ************************************ 00:17:38.248 START TEST nvmf_vfio_user_nvme_compliance 00:17:38.248 ************************************ 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:38.248 * Looking for test storage... 00:17:38.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:17:38.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.510 09:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.510 09:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:38.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.510 --rc genhtml_branch_coverage=1 00:17:38.510 --rc genhtml_function_coverage=1 00:17:38.510 --rc genhtml_legend=1 00:17:38.510 --rc geninfo_all_blocks=1 00:17:38.510 --rc geninfo_unexecuted_blocks=1 00:17:38.510 00:17:38.510 ' 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:38.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.510 --rc genhtml_branch_coverage=1 00:17:38.510 --rc genhtml_function_coverage=1 00:17:38.510 --rc genhtml_legend=1 00:17:38.510 --rc geninfo_all_blocks=1 00:17:38.510 --rc geninfo_unexecuted_blocks=1 00:17:38.510 00:17:38.510 ' 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:38.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.510 --rc genhtml_branch_coverage=1 00:17:38.510 --rc genhtml_function_coverage=1 00:17:38.510 --rc 
genhtml_legend=1 00:17:38.510 --rc geninfo_all_blocks=1 00:17:38.510 --rc geninfo_unexecuted_blocks=1 00:17:38.510 00:17:38.510 ' 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:38.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.510 --rc genhtml_branch_coverage=1 00:17:38.510 --rc genhtml_function_coverage=1 00:17:38.510 --rc genhtml_legend=1 00:17:38.510 --rc geninfo_all_blocks=1 00:17:38.510 --rc geninfo_unexecuted_blocks=1 00:17:38.510 00:17:38.510 ' 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.510 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.511 09:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.511 09:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=295985 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 295985' 00:17:38.511 Process pid: 295985 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 295985 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 295985 ']' 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.511 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:38.511 [2024-11-19 09:35:25.125886] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:38.511 [2024-11-19 09:35:25.125969] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.511 [2024-11-19 09:35:25.212020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.511 [2024-11-19 09:35:25.246935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.511 [2024-11-19 09:35:25.246965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.511 [2024-11-19 09:35:25.246972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.511 [2024-11-19 09:35:25.246977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.511 [2024-11-19 09:35:25.246981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:38.511 [2024-11-19 09:35:25.248912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.511 [2024-11-19 09:35:25.249099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.511 [2024-11-19 09:35:25.249100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.452 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.452 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:39.452 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.394 09:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:40.394 malloc0 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.394 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:40.394 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:40.394 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:40.394 00:17:40.394 00:17:40.394 CUnit - A unit testing framework for C - Version 2.1-3 00:17:40.394 http://cunit.sourceforge.net/ 00:17:40.394 00:17:40.394 00:17:40.394 Suite: nvme_compliance 00:17:40.655 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 09:35:27.169583] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.655 [2024-11-19 09:35:27.170875] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:40.655 [2024-11-19 09:35:27.170886] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:40.655 [2024-11-19 09:35:27.170891] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:40.655 [2024-11-19 09:35:27.172599] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.655 passed 00:17:40.655 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 09:35:27.251084] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.655 [2024-11-19 09:35:27.254106] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.655 passed 00:17:40.655 Test: admin_identify_ns ...[2024-11-19 09:35:27.325512] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.655 [2024-11-19 09:35:27.389170] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:40.655 [2024-11-19 09:35:27.397168] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:40.915 [2024-11-19 09:35:27.418243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:40.915 passed 00:17:40.915 Test: admin_get_features_mandatory_features ...[2024-11-19 09:35:27.491448] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.916 [2024-11-19 09:35:27.494468] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.916 passed 00:17:40.916 Test: admin_get_features_optional_features ...[2024-11-19 09:35:27.573944] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:40.916 [2024-11-19 09:35:27.576960] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:40.916 passed 00:17:40.916 Test: admin_set_features_number_of_queues ...[2024-11-19 09:35:27.650681] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.177 [2024-11-19 09:35:27.756249] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.177 passed 00:17:41.177 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 09:35:27.830476] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.177 [2024-11-19 09:35:27.833497] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.177 passed 00:17:41.177 Test: admin_get_log_page_with_lpo ...[2024-11-19 09:35:27.908244] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.438 [2024-11-19 09:35:27.978166] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:41.438 [2024-11-19 09:35:27.991204] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.438 passed 00:17:41.438 Test: fabric_property_get ...[2024-11-19 09:35:28.064415] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.438 [2024-11-19 09:35:28.065624] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:41.438 [2024-11-19 09:35:28.067438] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.438 passed 00:17:41.438 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 09:35:28.143894] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.438 [2024-11-19 09:35:28.145095] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:41.438 [2024-11-19 09:35:28.146904] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.438 passed 00:17:41.698 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 09:35:28.222647] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.698 [2024-11-19 09:35:28.307165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:41.698 [2024-11-19 09:35:28.323165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:41.698 [2024-11-19 09:35:28.328243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.698 passed 00:17:41.698 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 09:35:28.401438] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.698 [2024-11-19 09:35:28.402643] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:41.698 [2024-11-19 09:35:28.404467] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.698 passed 00:17:41.960 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 09:35:28.479513] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.960 [2024-11-19 09:35:28.559171] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:41.960 [2024-11-19 
09:35:28.583163] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:41.960 [2024-11-19 09:35:28.588230] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.960 passed 00:17:41.960 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 09:35:28.662434] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:41.960 [2024-11-19 09:35:28.663637] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:41.960 [2024-11-19 09:35:28.663655] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:41.960 [2024-11-19 09:35:28.665452] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:41.960 passed 00:17:42.220 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 09:35:28.740173] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:42.220 [2024-11-19 09:35:28.834164] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:42.220 [2024-11-19 09:35:28.842167] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:42.220 [2024-11-19 09:35:28.850162] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:42.220 [2024-11-19 09:35:28.858192] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:42.220 [2024-11-19 09:35:28.887236] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:42.220 passed 00:17:42.220 Test: admin_create_io_sq_verify_pc ...[2024-11-19 09:35:28.960382] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:42.480 [2024-11-19 09:35:28.977169] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:42.480 [2024-11-19 09:35:28.994554] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:42.480 passed 00:17:42.480 Test: admin_create_io_qp_max_qps ...[2024-11-19 09:35:29.071027] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:43.862 [2024-11-19 09:35:30.183167] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:43.862 [2024-11-19 09:35:30.568181] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:43.862 passed 00:17:44.123 Test: admin_create_io_sq_shared_cq ...[2024-11-19 09:35:30.643013] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:44.123 [2024-11-19 09:35:30.775166] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:44.123 [2024-11-19 09:35:30.812207] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:44.123 passed 00:17:44.123 00:17:44.123 Run Summary: Type Total Ran Passed Failed Inactive 00:17:44.123 suites 1 1 n/a 0 0 00:17:44.123 tests 18 18 18 0 0 00:17:44.123 asserts 360 360 360 0 n/a 00:17:44.123 00:17:44.123 Elapsed time = 1.499 seconds 00:17:44.123 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 295985 00:17:44.123 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 295985 ']' 00:17:44.123 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 295985 00:17:44.123 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:44.123 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.123 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295985 00:17:44.383 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.383 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.383 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295985' 00:17:44.383 killing process with pid 295985 00:17:44.383 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 295985 00:17:44.383 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 295985 00:17:44.383 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:44.383 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:44.383 00:17:44.383 real 0m6.215s 00:17:44.383 user 0m17.629s 00:17:44.383 sys 0m0.525s 00:17:44.383 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.383 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:44.383 ************************************ 00:17:44.383 END TEST nvmf_vfio_user_nvme_compliance 00:17:44.384 ************************************ 00:17:44.384 09:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:44.384 09:35:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.384 09:35:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.384 09:35:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.384 ************************************ 00:17:44.384 START TEST nvmf_vfio_user_fuzz 00:17:44.384 ************************************ 00:17:44.384 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:44.645 * Looking for test storage... 00:17:44.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.645 09:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:44.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.645 --rc genhtml_branch_coverage=1 00:17:44.645 --rc genhtml_function_coverage=1 00:17:44.645 --rc genhtml_legend=1 00:17:44.645 --rc geninfo_all_blocks=1 00:17:44.645 --rc geninfo_unexecuted_blocks=1 00:17:44.645 00:17:44.645 ' 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:44.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.645 --rc genhtml_branch_coverage=1 00:17:44.645 --rc genhtml_function_coverage=1 00:17:44.645 --rc genhtml_legend=1 00:17:44.645 --rc geninfo_all_blocks=1 00:17:44.645 --rc geninfo_unexecuted_blocks=1 00:17:44.645 00:17:44.645 ' 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:44.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.645 --rc genhtml_branch_coverage=1 00:17:44.645 --rc genhtml_function_coverage=1 00:17:44.645 --rc genhtml_legend=1 00:17:44.645 --rc geninfo_all_blocks=1 00:17:44.645 --rc geninfo_unexecuted_blocks=1 00:17:44.645 00:17:44.645 ' 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:44.645 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:44.645 --rc genhtml_branch_coverage=1 00:17:44.645 --rc genhtml_function_coverage=1 00:17:44.645 --rc genhtml_legend=1 00:17:44.645 --rc geninfo_all_blocks=1 00:17:44.645 --rc geninfo_unexecuted_blocks=1 00:17:44.645 00:17:44.645 ' 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.645 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.646 09:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:44.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=297167 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 297167' 00:17:44.646 Process pid: 297167 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 297167 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 297167 ']' 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.646 09:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.646 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:45.586 09:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.586 09:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:45.586 09:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:46.525 malloc0 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.525 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:46.526 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.526 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:46.526 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.526 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:46.526 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.526 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:46.526 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.526 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:46.526 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:18.671 Fuzzing completed. Shutting down the fuzz application 00:18:18.671 00:18:18.671 Dumping successful admin opcodes: 00:18:18.671 8, 9, 10, 24, 00:18:18.671 Dumping successful io opcodes: 00:18:18.671 0, 00:18:18.671 NS: 0x20000081ef00 I/O qp, Total commands completed: 1247479, total successful commands: 4898, random_seed: 1453697536 00:18:18.671 NS: 0x20000081ef00 admin qp, Total commands completed: 266870, total successful commands: 2150, random_seed: 1065975232 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 297167 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 297167 ']' 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 297167 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 297167 00:18:18.671 09:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 297167' 00:18:18.671 killing process with pid 297167 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 297167 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 297167 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:18.671 00:18:18.671 real 0m32.783s 00:18:18.671 user 0m35.274s 00:18:18.671 sys 0m26.311s 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:18.671 ************************************ 00:18:18.671 END TEST nvmf_vfio_user_fuzz 00:18:18.671 ************************************ 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:18.671 ************************************ 00:18:18.671 START TEST nvmf_auth_target 00:18:18.671 ************************************ 00:18:18.671 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:18.671 * Looking for test storage... 00:18:18.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.671 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:18.671 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:18.671 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:18.671 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.672 09:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.672 09:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:18.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.672 --rc genhtml_branch_coverage=1 00:18:18.672 --rc genhtml_function_coverage=1 00:18:18.672 --rc genhtml_legend=1 00:18:18.672 --rc geninfo_all_blocks=1 00:18:18.672 --rc geninfo_unexecuted_blocks=1 00:18:18.672 00:18:18.672 ' 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:18.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.672 --rc genhtml_branch_coverage=1 00:18:18.672 --rc genhtml_function_coverage=1 00:18:18.672 --rc genhtml_legend=1 00:18:18.672 --rc geninfo_all_blocks=1 00:18:18.672 --rc geninfo_unexecuted_blocks=1 00:18:18.672 00:18:18.672 ' 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:18.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.672 --rc genhtml_branch_coverage=1 00:18:18.672 --rc genhtml_function_coverage=1 00:18:18.672 --rc genhtml_legend=1 00:18:18.672 --rc geninfo_all_blocks=1 00:18:18.672 --rc geninfo_unexecuted_blocks=1 00:18:18.672 00:18:18.672 ' 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:18.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.672 --rc genhtml_branch_coverage=1 00:18:18.672 --rc genhtml_function_coverage=1 00:18:18.672 --rc genhtml_legend=1 00:18:18.672 
--rc geninfo_all_blocks=1 00:18:18.672 --rc geninfo_unexecuted_blocks=1 00:18:18.672 00:18:18.672 ' 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.672 
09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:18.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:18.672 09:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:18.672 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:18.673 09:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:18.673 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:25.269 09:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:25.269 09:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:25.269 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:25.269 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.269 
09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:25.269 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:25.269 
09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:25.269 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.269 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:25.269 09:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:25.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:18:25.270 00:18:25.270 --- 10.0.0.2 ping statistics --- 00:18:25.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.270 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:25.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:25.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:18:25.270 00:18:25.270 --- 10.0.0.1 ping statistics --- 00:18:25.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.270 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=307785 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 307785 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 307785 ']' 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.270 09:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.842 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.842 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:25.842 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.842 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.842 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=308061 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c3cda152cb5c6f3844452a57a376ef0ad942fcc1a182704e 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KGu 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c3cda152cb5c6f3844452a57a376ef0ad942fcc1a182704e 0 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c3cda152cb5c6f3844452a57a376ef0ad942fcc1a182704e 0 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c3cda152cb5c6f3844452a57a376ef0ad942fcc1a182704e 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KGu 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KGu 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.KGu 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:26.104 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3d60afc341051d72f55bce5c47c9abc8f2324f33cb50e5091155f1e5e5da1544 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.20a 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3d60afc341051d72f55bce5c47c9abc8f2324f33cb50e5091155f1e5e5da1544 3 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3d60afc341051d72f55bce5c47c9abc8f2324f33cb50e5091155f1e5e5da1544 3 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3d60afc341051d72f55bce5c47c9abc8f2324f33cb50e5091155f1e5e5da1544 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.20a 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.20a 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.20a 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dedd2c0603432f913cf3c191918e2d9d 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8Zg 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dedd2c0603432f913cf3c191918e2d9d 1 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
dedd2c0603432f913cf3c191918e2d9d 1 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dedd2c0603432f913cf3c191918e2d9d 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8Zg 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8Zg 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.8Zg 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:26.105 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6d3cff4254a6abf3f3f0637bdb174274633a659edd425f8b 00:18:26.105 09:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qF5 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6d3cff4254a6abf3f3f0637bdb174274633a659edd425f8b 2 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6d3cff4254a6abf3f3f0637bdb174274633a659edd425f8b 2 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6d3cff4254a6abf3f3f0637bdb174274633a659edd425f8b 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qF5 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qF5 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.qF5 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0e6b0ef029f17ba6057e8f4567f4b83e1bfc55beebd539af 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.O6V 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0e6b0ef029f17ba6057e8f4567f4b83e1bfc55beebd539af 2 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0e6b0ef029f17ba6057e8f4567f4b83e1bfc55beebd539af 2 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0e6b0ef029f17ba6057e8f4567f4b83e1bfc55beebd539af 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.O6V 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.O6V 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.O6V 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=92ce1721a64456702c621872adabb4c5 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cqY 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 92ce1721a64456702c621872adabb4c5 1 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 92ce1721a64456702c621872adabb4c5 1 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=92ce1721a64456702c621872adabb4c5 00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:18:26.367 09:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:26.367 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cqY 00:18:26.367 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cqY 00:18:26.367 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.cqY 00:18:26.367 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:26.367 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:26.367 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:26.367 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:26.367 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:26.367 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0d861bb565096f264b9b23ccc6a2d0a02428aa305b5aa58bc220ddd6b7c62972 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tNa 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0d861bb565096f264b9b23ccc6a2d0a02428aa305b5aa58bc220ddd6b7c62972 3 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 0d861bb565096f264b9b23ccc6a2d0a02428aa305b5aa58bc220ddd6b7c62972 3 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0d861bb565096f264b9b23ccc6a2d0a02428aa305b5aa58bc220ddd6b7c62972 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tNa 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tNa 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.tNa 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 307785 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 307785 ']' 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.368 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.629 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.629 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:26.629 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 308061 /var/tmp/host.sock 00:18:26.629 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 308061 ']' 00:18:26.629 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:26.629 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.629 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:26.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:26.629 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.629 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KGu 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.KGu 00:18:26.891 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.KGu 00:18:27.152 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.20a ]] 00:18:27.152 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.20a 00:18:27.152 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.152 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.152 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.152 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.20a 00:18:27.152 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.20a 00:18:27.414 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:27.414 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8Zg 00:18:27.414 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.414 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.414 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.414 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8Zg 00:18:27.414 09:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8Zg 00:18:27.414 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.qF5 ]] 00:18:27.414 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qF5 00:18:27.414 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.414 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.414 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.414 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qF5 00:18:27.414 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qF5 00:18:27.675 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:27.675 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.O6V 00:18:27.676 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.676 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.676 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.676 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.O6V 00:18:27.676 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.O6V 00:18:27.937 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.cqY ]] 00:18:27.937 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cqY 00:18:27.937 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.937 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.937 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.937 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cqY 00:18:27.937 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cqY 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tNa 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tNa 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tNa 00:18:28.198 09:36:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:28.198 09:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.460 09:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.460 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.722 00:18:28.722 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.722 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.722 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.983 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.983 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.983 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.983 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.983 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.983 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.983 { 00:18:28.983 "cntlid": 1, 00:18:28.983 "qid": 0, 00:18:28.983 "state": "enabled", 00:18:28.983 "thread": "nvmf_tgt_poll_group_000", 00:18:28.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.984 "listen_address": { 00:18:28.984 "trtype": "TCP", 00:18:28.984 "adrfam": "IPv4", 00:18:28.984 "traddr": "10.0.0.2", 00:18:28.984 "trsvcid": "4420" 00:18:28.984 }, 00:18:28.984 "peer_address": { 00:18:28.984 "trtype": "TCP", 00:18:28.984 "adrfam": "IPv4", 00:18:28.984 "traddr": "10.0.0.1", 00:18:28.984 "trsvcid": "33740" 00:18:28.984 }, 00:18:28.984 "auth": { 00:18:28.984 "state": "completed", 00:18:28.984 "digest": "sha256", 00:18:28.984 "dhgroup": "null" 00:18:28.984 } 00:18:28.984 } 00:18:28.984 ]' 00:18:28.984 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.984 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.984 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.984 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:28.984 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.984 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.984 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.984 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.245 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:29.245 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:33.454 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.454 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.455 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.455 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.455 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.455 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.455 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:18:33.455 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.455 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.715 00:18:33.715 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.715 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.716 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.716 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.716 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.716 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.716 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.977 { 00:18:33.977 "cntlid": 3, 00:18:33.977 "qid": 0, 00:18:33.977 "state": "enabled", 00:18:33.977 "thread": "nvmf_tgt_poll_group_000", 00:18:33.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.977 "listen_address": { 00:18:33.977 "trtype": "TCP", 00:18:33.977 "adrfam": "IPv4", 00:18:33.977 
"traddr": "10.0.0.2", 00:18:33.977 "trsvcid": "4420" 00:18:33.977 }, 00:18:33.977 "peer_address": { 00:18:33.977 "trtype": "TCP", 00:18:33.977 "adrfam": "IPv4", 00:18:33.977 "traddr": "10.0.0.1", 00:18:33.977 "trsvcid": "33764" 00:18:33.977 }, 00:18:33.977 "auth": { 00:18:33.977 "state": "completed", 00:18:33.977 "digest": "sha256", 00:18:33.977 "dhgroup": "null" 00:18:33.977 } 00:18:33.977 } 00:18:33.977 ]' 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.977 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.240 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:18:34.240 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:18:34.813 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.813 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.813 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.813 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.813 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.813 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.813 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.813 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.336 00:18:35.336 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.336 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.336 
09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.597 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.598 { 00:18:35.598 "cntlid": 5, 00:18:35.598 "qid": 0, 00:18:35.598 "state": "enabled", 00:18:35.598 "thread": "nvmf_tgt_poll_group_000", 00:18:35.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.598 "listen_address": { 00:18:35.598 "trtype": "TCP", 00:18:35.598 "adrfam": "IPv4", 00:18:35.598 "traddr": "10.0.0.2", 00:18:35.598 "trsvcid": "4420" 00:18:35.598 }, 00:18:35.598 "peer_address": { 00:18:35.598 "trtype": "TCP", 00:18:35.598 "adrfam": "IPv4", 00:18:35.598 "traddr": "10.0.0.1", 00:18:35.598 "trsvcid": "33782" 00:18:35.598 }, 00:18:35.598 "auth": { 00:18:35.598 "state": "completed", 00:18:35.598 "digest": "sha256", 00:18:35.598 "dhgroup": "null" 00:18:35.598 } 00:18:35.598 } 00:18:35.598 ]' 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.598 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.861 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:18:35.861 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:18:36.432 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.432 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.432 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.432 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.432 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.432 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.432 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.432 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.693 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.693 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.955 
09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.955 { 00:18:36.955 "cntlid": 7, 00:18:36.955 "qid": 0, 00:18:36.955 "state": "enabled", 00:18:36.955 "thread": "nvmf_tgt_poll_group_000", 00:18:36.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.955 "listen_address": { 00:18:36.955 "trtype": "TCP", 00:18:36.955 "adrfam": "IPv4", 00:18:36.955 "traddr": "10.0.0.2", 00:18:36.955 "trsvcid": "4420" 00:18:36.955 }, 00:18:36.955 "peer_address": { 00:18:36.955 "trtype": "TCP", 00:18:36.955 "adrfam": "IPv4", 00:18:36.955 "traddr": "10.0.0.1", 00:18:36.955 "trsvcid": "33814" 00:18:36.955 }, 00:18:36.955 "auth": { 00:18:36.955 "state": "completed", 00:18:36.955 "digest": "sha256", 00:18:36.955 "dhgroup": "null" 00:18:36.955 } 00:18:36.955 } 00:18:36.955 ]' 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.955 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.215 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:37.215 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.215 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.215 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.215 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.215 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:18:37.216 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:18:37.788 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.788 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.788 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.788 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.788 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.788 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.788 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.788 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.788 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.050 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.311 00:18:38.311 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.311 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.311 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.571 { 00:18:38.571 "cntlid": 9, 00:18:38.571 "qid": 0, 00:18:38.571 "state": "enabled", 00:18:38.571 "thread": "nvmf_tgt_poll_group_000", 00:18:38.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.571 "listen_address": { 00:18:38.571 "trtype": "TCP", 00:18:38.571 "adrfam": "IPv4", 00:18:38.571 "traddr": "10.0.0.2", 00:18:38.571 "trsvcid": "4420" 00:18:38.571 }, 00:18:38.571 "peer_address": { 00:18:38.571 "trtype": "TCP", 00:18:38.571 "adrfam": "IPv4", 00:18:38.571 "traddr": "10.0.0.1", 00:18:38.571 "trsvcid": "50982" 00:18:38.571 
}, 00:18:38.571 "auth": { 00:18:38.571 "state": "completed", 00:18:38.571 "digest": "sha256", 00:18:38.571 "dhgroup": "ffdhe2048" 00:18:38.571 } 00:18:38.571 } 00:18:38.571 ]' 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.571 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.572 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.572 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.832 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:38.832 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret 
DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:39.403 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.403 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.403 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.403 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.403 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.403 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.403 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:39.403 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.664 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.925 00:18:39.925 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.925 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.925 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.186 { 00:18:40.186 "cntlid": 11, 00:18:40.186 "qid": 0, 00:18:40.186 "state": "enabled", 00:18:40.186 "thread": "nvmf_tgt_poll_group_000", 00:18:40.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.186 "listen_address": { 00:18:40.186 "trtype": "TCP", 00:18:40.186 "adrfam": "IPv4", 00:18:40.186 "traddr": "10.0.0.2", 00:18:40.186 "trsvcid": "4420" 00:18:40.186 }, 00:18:40.186 "peer_address": { 00:18:40.186 "trtype": "TCP", 00:18:40.186 "adrfam": "IPv4", 00:18:40.186 "traddr": "10.0.0.1", 00:18:40.186 "trsvcid": "51008" 00:18:40.186 }, 00:18:40.186 "auth": { 00:18:40.186 "state": "completed", 00:18:40.186 "digest": "sha256", 00:18:40.186 "dhgroup": "ffdhe2048" 00:18:40.186 } 00:18:40.186 } 00:18:40.186 ]' 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.186 09:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.186 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.446 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:18:40.446 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:18:41.018 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.018 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.018 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:41.018 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.018 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.018 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.018 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:41.019 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.280 09:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.541 00:18:41.541 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.541 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.541 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.802 09:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.802 { 00:18:41.802 "cntlid": 13, 00:18:41.802 "qid": 0, 00:18:41.802 "state": "enabled", 00:18:41.802 "thread": "nvmf_tgt_poll_group_000", 00:18:41.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:41.802 "listen_address": { 00:18:41.802 "trtype": "TCP", 00:18:41.802 "adrfam": "IPv4", 00:18:41.802 "traddr": "10.0.0.2", 00:18:41.802 "trsvcid": "4420" 00:18:41.802 }, 00:18:41.802 "peer_address": { 00:18:41.802 "trtype": "TCP", 00:18:41.802 "adrfam": "IPv4", 00:18:41.802 "traddr": "10.0.0.1", 00:18:41.802 "trsvcid": "51034" 00:18:41.802 }, 00:18:41.802 "auth": { 00:18:41.802 "state": "completed", 00:18:41.802 "digest": "sha256", 00:18:41.802 "dhgroup": "ffdhe2048" 00:18:41.802 } 00:18:41.802 } 00:18:41.802 ]' 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.802 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.063 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:18:42.063 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:18:42.635 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.635 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.635 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.635 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.635 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.635 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.635 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.635 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.899 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.160 00:18:43.160 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.160 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.160 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.421 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.421 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.421 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.421 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.421 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.421 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.421 { 00:18:43.421 "cntlid": 15, 00:18:43.421 "qid": 0, 00:18:43.421 "state": "enabled", 00:18:43.421 "thread": "nvmf_tgt_poll_group_000", 00:18:43.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.421 "listen_address": { 00:18:43.421 "trtype": "TCP", 00:18:43.421 "adrfam": "IPv4", 00:18:43.421 "traddr": "10.0.0.2", 00:18:43.421 "trsvcid": "4420" 00:18:43.421 }, 00:18:43.421 "peer_address": { 00:18:43.421 "trtype": "TCP", 00:18:43.421 "adrfam": "IPv4", 00:18:43.421 "traddr": "10.0.0.1", 
00:18:43.421 "trsvcid": "51070" 00:18:43.421 }, 00:18:43.421 "auth": { 00:18:43.421 "state": "completed", 00:18:43.421 "digest": "sha256", 00:18:43.421 "dhgroup": "ffdhe2048" 00:18:43.421 } 00:18:43.421 } 00:18:43.421 ]' 00:18:43.421 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.421 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.421 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.421 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:43.421 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.421 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.421 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.421 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.681 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:18:43.681 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:18:44.253 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.253 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.253 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.253 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.253 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.253 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.253 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.253 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:44.253 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:44.515 09:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.515 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.775 00:18:44.775 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.775 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.775 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.036 { 00:18:45.036 "cntlid": 17, 00:18:45.036 "qid": 0, 00:18:45.036 "state": "enabled", 00:18:45.036 "thread": "nvmf_tgt_poll_group_000", 00:18:45.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.036 "listen_address": { 00:18:45.036 "trtype": "TCP", 00:18:45.036 "adrfam": "IPv4", 00:18:45.036 "traddr": "10.0.0.2", 00:18:45.036 "trsvcid": "4420" 00:18:45.036 }, 00:18:45.036 "peer_address": { 00:18:45.036 "trtype": "TCP", 00:18:45.036 "adrfam": "IPv4", 00:18:45.036 "traddr": "10.0.0.1", 00:18:45.036 "trsvcid": "51106" 00:18:45.036 }, 00:18:45.036 "auth": { 00:18:45.036 "state": "completed", 00:18:45.036 "digest": "sha256", 00:18:45.036 "dhgroup": "ffdhe3072" 00:18:45.036 } 00:18:45.036 } 00:18:45.036 ]' 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.036 09:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.036 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.297 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:45.297 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:45.868 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.130 09:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.130 09:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.130 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.391 00:18:46.391 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.391 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.391 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.652 { 00:18:46.652 "cntlid": 19, 00:18:46.652 "qid": 0, 00:18:46.652 "state": "enabled", 00:18:46.652 "thread": "nvmf_tgt_poll_group_000", 00:18:46.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:46.652 "listen_address": { 00:18:46.652 "trtype": "TCP", 00:18:46.652 "adrfam": "IPv4", 00:18:46.652 "traddr": "10.0.0.2", 00:18:46.652 "trsvcid": "4420" 00:18:46.652 }, 00:18:46.652 "peer_address": { 00:18:46.652 "trtype": "TCP", 00:18:46.652 "adrfam": "IPv4", 00:18:46.652 "traddr": "10.0.0.1", 00:18:46.652 "trsvcid": "51138" 00:18:46.652 }, 00:18:46.652 "auth": { 00:18:46.652 "state": "completed", 00:18:46.652 "digest": "sha256", 00:18:46.652 "dhgroup": "ffdhe3072" 00:18:46.652 } 00:18:46.652 } 00:18:46.652 ]' 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.652 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.913 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:18:46.913 09:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:18:47.485 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.485 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.485 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.485 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.485 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.485 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.485 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.485 09:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.746 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.007 00:18:48.007 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.007 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.007 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.268 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.268 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.268 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.268 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.268 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.268 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.268 { 00:18:48.268 "cntlid": 21, 00:18:48.268 "qid": 0, 00:18:48.268 "state": "enabled", 00:18:48.268 "thread": "nvmf_tgt_poll_group_000", 00:18:48.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.269 "listen_address": { 00:18:48.269 "trtype": "TCP", 00:18:48.269 "adrfam": "IPv4", 00:18:48.269 "traddr": "10.0.0.2", 00:18:48.269 
"trsvcid": "4420" 00:18:48.269 }, 00:18:48.269 "peer_address": { 00:18:48.269 "trtype": "TCP", 00:18:48.269 "adrfam": "IPv4", 00:18:48.269 "traddr": "10.0.0.1", 00:18:48.269 "trsvcid": "51172" 00:18:48.269 }, 00:18:48.269 "auth": { 00:18:48.269 "state": "completed", 00:18:48.269 "digest": "sha256", 00:18:48.269 "dhgroup": "ffdhe3072" 00:18:48.269 } 00:18:48.269 } 00:18:48.269 ]' 00:18:48.269 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.269 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.269 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.269 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:48.269 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.269 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.269 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.530 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.530 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:18:48.530 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:18:49.101 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.101 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.363 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.363 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.363 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.363 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.363 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.363 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.363 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.625 00:18:49.625 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.625 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.625 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.885 { 00:18:49.885 "cntlid": 23, 00:18:49.885 "qid": 0, 00:18:49.885 "state": "enabled", 00:18:49.885 "thread": "nvmf_tgt_poll_group_000", 00:18:49.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:49.885 "listen_address": { 00:18:49.885 "trtype": "TCP", 00:18:49.885 "adrfam": "IPv4", 00:18:49.885 "traddr": "10.0.0.2", 00:18:49.885 "trsvcid": "4420" 00:18:49.885 }, 00:18:49.885 "peer_address": { 00:18:49.885 "trtype": "TCP", 00:18:49.885 "adrfam": "IPv4", 00:18:49.885 "traddr": "10.0.0.1", 00:18:49.885 "trsvcid": "53662" 00:18:49.885 }, 00:18:49.885 "auth": { 00:18:49.885 "state": "completed", 00:18:49.885 "digest": "sha256", 00:18:49.885 "dhgroup": "ffdhe3072" 00:18:49.885 } 00:18:49.885 } 00:18:49.885 ]' 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.885 09:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.885 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.146 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:18:50.146 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:18:50.717 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.717 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.717 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.717 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:50.717 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.717 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.717 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.717 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.717 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.978 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.240 00:18:51.240 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.240 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.240 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.501 09:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.501 { 00:18:51.501 "cntlid": 25, 00:18:51.501 "qid": 0, 00:18:51.501 "state": "enabled", 00:18:51.501 "thread": "nvmf_tgt_poll_group_000", 00:18:51.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.501 "listen_address": { 00:18:51.501 "trtype": "TCP", 00:18:51.501 "adrfam": "IPv4", 00:18:51.501 "traddr": "10.0.0.2", 00:18:51.501 "trsvcid": "4420" 00:18:51.501 }, 00:18:51.501 "peer_address": { 00:18:51.501 "trtype": "TCP", 00:18:51.501 "adrfam": "IPv4", 00:18:51.501 "traddr": "10.0.0.1", 00:18:51.501 "trsvcid": "53668" 00:18:51.501 }, 00:18:51.501 "auth": { 00:18:51.501 "state": "completed", 00:18:51.501 "digest": "sha256", 00:18:51.501 "dhgroup": "ffdhe4096" 00:18:51.501 } 00:18:51.501 } 00:18:51.501 ]' 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.501 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.763 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:51.763 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:52.336 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.336 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.336 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.336 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.336 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.336 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.336 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.336 09:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.596 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.857 00:18:52.857 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.857 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.857 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.119 { 00:18:53.119 "cntlid": 27, 00:18:53.119 "qid": 0, 00:18:53.119 "state": "enabled", 00:18:53.119 "thread": "nvmf_tgt_poll_group_000", 00:18:53.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:53.119 "listen_address": { 00:18:53.119 "trtype": "TCP", 00:18:53.119 "adrfam": "IPv4", 00:18:53.119 "traddr": "10.0.0.2", 00:18:53.119 
"trsvcid": "4420" 00:18:53.119 }, 00:18:53.119 "peer_address": { 00:18:53.119 "trtype": "TCP", 00:18:53.119 "adrfam": "IPv4", 00:18:53.119 "traddr": "10.0.0.1", 00:18:53.119 "trsvcid": "53686" 00:18:53.119 }, 00:18:53.119 "auth": { 00:18:53.119 "state": "completed", 00:18:53.119 "digest": "sha256", 00:18:53.119 "dhgroup": "ffdhe4096" 00:18:53.119 } 00:18:53.119 } 00:18:53.119 ]' 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.119 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.380 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:18:53.380 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:18:53.952 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.952 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.952 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.952 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.952 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.952 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.952 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:53.952 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.213 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.474 00:18:54.474 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.474 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:54.474 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.735 { 00:18:54.735 "cntlid": 29, 00:18:54.735 "qid": 0, 00:18:54.735 "state": "enabled", 00:18:54.735 "thread": "nvmf_tgt_poll_group_000", 00:18:54.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:54.735 "listen_address": { 00:18:54.735 "trtype": "TCP", 00:18:54.735 "adrfam": "IPv4", 00:18:54.735 "traddr": "10.0.0.2", 00:18:54.735 "trsvcid": "4420" 00:18:54.735 }, 00:18:54.735 "peer_address": { 00:18:54.735 "trtype": "TCP", 00:18:54.735 "adrfam": "IPv4", 00:18:54.735 "traddr": "10.0.0.1", 00:18:54.735 "trsvcid": "53716" 00:18:54.735 }, 00:18:54.735 "auth": { 00:18:54.735 "state": "completed", 00:18:54.735 "digest": "sha256", 00:18:54.735 "dhgroup": "ffdhe4096" 00:18:54.735 } 00:18:54.735 } 00:18:54.735 ]' 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.735 09:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.735 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.997 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:18:54.997 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:18:55.569 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.569 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.569 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.569 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.569 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.569 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.569 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.830 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.091 00:18:56.091 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.091 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.091 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.352 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.352 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.352 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.352 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.352 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.352 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.352 { 00:18:56.352 "cntlid": 31, 00:18:56.352 "qid": 0, 00:18:56.352 "state": "enabled", 00:18:56.352 "thread": "nvmf_tgt_poll_group_000", 00:18:56.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.352 "listen_address": { 00:18:56.352 "trtype": "TCP", 00:18:56.352 "adrfam": "IPv4", 00:18:56.352 "traddr": "10.0.0.2", 00:18:56.352 "trsvcid": "4420" 00:18:56.352 }, 00:18:56.352 "peer_address": { 00:18:56.352 "trtype": "TCP", 00:18:56.352 "adrfam": "IPv4", 00:18:56.352 "traddr": "10.0.0.1", 00:18:56.352 "trsvcid": "53738" 00:18:56.352 }, 00:18:56.352 "auth": { 00:18:56.352 "state": "completed", 00:18:56.352 "digest": "sha256", 00:18:56.352 "dhgroup": "ffdhe4096" 00:18:56.352 } 00:18:56.352 } 00:18:56.352 ]' 00:18:56.352 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.352 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.352 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.352 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:56.352 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.352 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.352 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.352 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.613 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:18:56.613 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:18:57.184 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.184 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.184 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.184 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.184 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.184 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.184 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.184 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.184 09:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.444 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:57.444 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.444 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.444 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:57.445 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.445 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.445 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.445 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.445 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.445 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.445 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.445 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.445 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.706 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.968 { 00:18:57.968 "cntlid": 33, 00:18:57.968 "qid": 0, 00:18:57.968 "state": "enabled", 00:18:57.968 "thread": "nvmf_tgt_poll_group_000", 00:18:57.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:57.968 "listen_address": { 00:18:57.968 "trtype": "TCP", 00:18:57.968 "adrfam": "IPv4", 00:18:57.968 "traddr": "10.0.0.2", 00:18:57.968 
"trsvcid": "4420" 00:18:57.968 }, 00:18:57.968 "peer_address": { 00:18:57.968 "trtype": "TCP", 00:18:57.968 "adrfam": "IPv4", 00:18:57.968 "traddr": "10.0.0.1", 00:18:57.968 "trsvcid": "53764" 00:18:57.968 }, 00:18:57.968 "auth": { 00:18:57.968 "state": "completed", 00:18:57.968 "digest": "sha256", 00:18:57.968 "dhgroup": "ffdhe6144" 00:18:57.968 } 00:18:57.968 } 00:18:57.968 ]' 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.968 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.230 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.230 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.230 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.230 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.230 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.230 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:58.230 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.174 09:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.174 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.436 00:18:59.436 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.436 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.436 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.698 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.698 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.698 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.698 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.698 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.698 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.698 { 00:18:59.698 "cntlid": 35, 00:18:59.698 "qid": 0, 00:18:59.698 "state": "enabled", 00:18:59.698 "thread": "nvmf_tgt_poll_group_000", 00:18:59.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:59.698 "listen_address": { 00:18:59.698 "trtype": "TCP", 00:18:59.698 "adrfam": "IPv4", 00:18:59.698 "traddr": "10.0.0.2", 00:18:59.698 "trsvcid": "4420" 00:18:59.698 }, 00:18:59.698 "peer_address": { 00:18:59.698 "trtype": "TCP", 00:18:59.698 "adrfam": "IPv4", 00:18:59.698 "traddr": "10.0.0.1", 00:18:59.698 "trsvcid": "40704" 00:18:59.698 }, 00:18:59.698 "auth": { 00:18:59.698 "state": "completed", 00:18:59.698 "digest": "sha256", 00:18:59.698 "dhgroup": "ffdhe6144" 00:18:59.698 } 00:18:59.698 } 00:18:59.698 ]' 00:18:59.698 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.698 09:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.698 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.959 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.959 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.959 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.959 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.959 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.959 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:18:59.959 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.904 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.165 00:19:01.165 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.165 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.165 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.426 09:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.426 { 00:19:01.426 "cntlid": 37, 00:19:01.426 "qid": 0, 00:19:01.426 "state": "enabled", 00:19:01.426 "thread": "nvmf_tgt_poll_group_000", 00:19:01.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.426 "listen_address": { 00:19:01.426 "trtype": "TCP", 00:19:01.426 "adrfam": "IPv4", 00:19:01.426 "traddr": "10.0.0.2", 00:19:01.426 "trsvcid": "4420" 00:19:01.426 }, 00:19:01.426 "peer_address": { 00:19:01.426 "trtype": "TCP", 00:19:01.426 "adrfam": "IPv4", 00:19:01.426 "traddr": "10.0.0.1", 00:19:01.426 "trsvcid": "40726" 00:19:01.426 }, 00:19:01.426 "auth": { 00:19:01.426 "state": "completed", 00:19:01.426 "digest": "sha256", 00:19:01.426 "dhgroup": "ffdhe6144" 00:19:01.426 } 00:19:01.426 } 00:19:01.426 ]' 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.426 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.687 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.687 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.687 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.687 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:01.687 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.631 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.890 00:19:02.890 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.890 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.890 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.151 { 00:19:03.151 "cntlid": 39, 00:19:03.151 "qid": 0, 00:19:03.151 "state": "enabled", 00:19:03.151 "thread": "nvmf_tgt_poll_group_000", 00:19:03.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.151 "listen_address": { 00:19:03.151 "trtype": "TCP", 00:19:03.151 "adrfam": 
"IPv4", 00:19:03.151 "traddr": "10.0.0.2", 00:19:03.151 "trsvcid": "4420" 00:19:03.151 }, 00:19:03.151 "peer_address": { 00:19:03.151 "trtype": "TCP", 00:19:03.151 "adrfam": "IPv4", 00:19:03.151 "traddr": "10.0.0.1", 00:19:03.151 "trsvcid": "40760" 00:19:03.151 }, 00:19:03.151 "auth": { 00:19:03.151 "state": "completed", 00:19:03.151 "digest": "sha256", 00:19:03.151 "dhgroup": "ffdhe6144" 00:19:03.151 } 00:19:03.151 } 00:19:03.151 ]' 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.151 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:03.152 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.152 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.152 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.152 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.412 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:03.412 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:03.984 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.984 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.984 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.984 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.984 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.984 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.984 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.984 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.984 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.245 
09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.245 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.817 00:19:04.817 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.817 09:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.817 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.077 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.077 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.077 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.077 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.078 { 00:19:05.078 "cntlid": 41, 00:19:05.078 "qid": 0, 00:19:05.078 "state": "enabled", 00:19:05.078 "thread": "nvmf_tgt_poll_group_000", 00:19:05.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:05.078 "listen_address": { 00:19:05.078 "trtype": "TCP", 00:19:05.078 "adrfam": "IPv4", 00:19:05.078 "traddr": "10.0.0.2", 00:19:05.078 "trsvcid": "4420" 00:19:05.078 }, 00:19:05.078 "peer_address": { 00:19:05.078 "trtype": "TCP", 00:19:05.078 "adrfam": "IPv4", 00:19:05.078 "traddr": "10.0.0.1", 00:19:05.078 "trsvcid": "40788" 00:19:05.078 }, 00:19:05.078 "auth": { 00:19:05.078 "state": "completed", 00:19:05.078 "digest": "sha256", 00:19:05.078 "dhgroup": "ffdhe8192" 00:19:05.078 } 00:19:05.078 } 00:19:05.078 ]' 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.078 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.339 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:05.339 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:05.910 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.910 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.910 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.910 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.910 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.910 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.910 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:05.910 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.171 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.744 00:19:06.744 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.744 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.745 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.745 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.745 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.745 09:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.745 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.745 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.745 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.745 { 00:19:06.745 "cntlid": 43, 00:19:06.745 "qid": 0, 00:19:06.745 "state": "enabled", 00:19:06.745 "thread": "nvmf_tgt_poll_group_000", 00:19:06.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.745 "listen_address": { 00:19:06.745 "trtype": "TCP", 00:19:06.745 "adrfam": "IPv4", 00:19:06.745 "traddr": "10.0.0.2", 00:19:06.745 "trsvcid": "4420" 00:19:06.745 }, 00:19:06.745 "peer_address": { 00:19:06.745 "trtype": "TCP", 00:19:06.745 "adrfam": "IPv4", 00:19:06.745 "traddr": "10.0.0.1", 00:19:06.745 "trsvcid": "40820" 00:19:06.745 }, 00:19:06.745 "auth": { 00:19:06.745 "state": "completed", 00:19:06.745 "digest": "sha256", 00:19:06.745 "dhgroup": "ffdhe8192" 00:19:06.745 } 00:19:06.745 } 00:19:06.745 ]' 00:19:06.745 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.745 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.745 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.006 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.007 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.007 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.007 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.007 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.007 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:07.007 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:07.948 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.948 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.948 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.948 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.948 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.948 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.948 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:07.948 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:07.948 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.949 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.522 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.522 { 00:19:08.522 "cntlid": 45, 00:19:08.522 "qid": 0, 00:19:08.522 "state": "enabled", 00:19:08.522 "thread": "nvmf_tgt_poll_group_000", 00:19:08.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:08.522 
"listen_address": { 00:19:08.522 "trtype": "TCP", 00:19:08.522 "adrfam": "IPv4", 00:19:08.522 "traddr": "10.0.0.2", 00:19:08.522 "trsvcid": "4420" 00:19:08.522 }, 00:19:08.522 "peer_address": { 00:19:08.522 "trtype": "TCP", 00:19:08.522 "adrfam": "IPv4", 00:19:08.522 "traddr": "10.0.0.1", 00:19:08.522 "trsvcid": "50906" 00:19:08.522 }, 00:19:08.522 "auth": { 00:19:08.522 "state": "completed", 00:19:08.522 "digest": "sha256", 00:19:08.522 "dhgroup": "ffdhe8192" 00:19:08.522 } 00:19:08.522 } 00:19:08.522 ]' 00:19:08.522 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.782 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.782 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.782 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.782 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.782 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.782 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.782 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.044 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:09.044 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:09.615 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.615 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.615 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.615 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.615 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.615 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.615 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:09.615 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.875 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:09.876 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.876 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.136 00:19:10.396 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.396 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:10.396 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.396 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.396 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.396 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.396 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.396 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.396 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.396 { 00:19:10.396 "cntlid": 47, 00:19:10.396 "qid": 0, 00:19:10.396 "state": "enabled", 00:19:10.396 "thread": "nvmf_tgt_poll_group_000", 00:19:10.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:10.396 "listen_address": { 00:19:10.396 "trtype": "TCP", 00:19:10.396 "adrfam": "IPv4", 00:19:10.396 "traddr": "10.0.0.2", 00:19:10.396 "trsvcid": "4420" 00:19:10.396 }, 00:19:10.396 "peer_address": { 00:19:10.396 "trtype": "TCP", 00:19:10.396 "adrfam": "IPv4", 00:19:10.396 "traddr": "10.0.0.1", 00:19:10.396 "trsvcid": "50928" 00:19:10.396 }, 00:19:10.396 "auth": { 00:19:10.396 "state": "completed", 00:19:10.396 "digest": "sha256", 00:19:10.396 "dhgroup": "ffdhe8192" 00:19:10.396 } 00:19:10.396 } 00:19:10.396 ]' 00:19:10.396 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.396 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.396 09:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.656 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.656 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.656 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.656 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.656 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.656 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:10.656 09:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:11.595 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.595 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.595 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.596 
09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.596 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.857 00:19:11.857 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.857 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.857 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.118 { 00:19:12.118 "cntlid": 49, 00:19:12.118 "qid": 0, 00:19:12.118 "state": "enabled", 00:19:12.118 "thread": "nvmf_tgt_poll_group_000", 00:19:12.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:12.118 "listen_address": { 00:19:12.118 "trtype": "TCP", 00:19:12.118 "adrfam": "IPv4", 00:19:12.118 "traddr": "10.0.0.2", 00:19:12.118 "trsvcid": "4420" 00:19:12.118 }, 00:19:12.118 "peer_address": { 00:19:12.118 "trtype": "TCP", 00:19:12.118 "adrfam": "IPv4", 00:19:12.118 "traddr": "10.0.0.1", 00:19:12.118 "trsvcid": "50960" 00:19:12.118 }, 00:19:12.118 "auth": { 00:19:12.118 "state": "completed", 00:19:12.118 "digest": "sha384", 00:19:12.118 "dhgroup": "null" 00:19:12.118 } 00:19:12.118 } 00:19:12.118 ]' 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:12.118 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.381 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:12.381 09:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:12.951 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.951 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.951 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.951 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.951 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.951 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.951 09:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:12.951 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.212 09:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.490 00:19:13.490 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.490 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.490 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.490 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.490 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.490 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.490 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.753 { 00:19:13.753 "cntlid": 51, 00:19:13.753 "qid": 0, 00:19:13.753 "state": "enabled", 00:19:13.753 "thread": "nvmf_tgt_poll_group_000", 00:19:13.753 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.753 "listen_address": { 00:19:13.753 "trtype": "TCP", 00:19:13.753 "adrfam": "IPv4", 00:19:13.753 "traddr": "10.0.0.2", 00:19:13.753 "trsvcid": "4420" 00:19:13.753 }, 00:19:13.753 "peer_address": { 00:19:13.753 "trtype": "TCP", 00:19:13.753 "adrfam": "IPv4", 00:19:13.753 "traddr": "10.0.0.1", 00:19:13.753 "trsvcid": "50998" 00:19:13.753 }, 00:19:13.753 "auth": { 00:19:13.753 "state": "completed", 00:19:13.753 "digest": "sha384", 00:19:13.753 "dhgroup": "null" 00:19:13.753 } 00:19:13.753 } 00:19:13.753 ]' 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.753 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.014 09:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:14.014 09:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:14.587 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.587 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.587 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.587 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.587 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.587 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.587 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:14.587 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.848 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.111 00:19:15.111 09:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.111 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.111 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.111 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.111 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.111 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.111 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.111 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.111 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.111 { 00:19:15.111 "cntlid": 53, 00:19:15.111 "qid": 0, 00:19:15.111 "state": "enabled", 00:19:15.111 "thread": "nvmf_tgt_poll_group_000", 00:19:15.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:15.111 "listen_address": { 00:19:15.111 "trtype": "TCP", 00:19:15.111 "adrfam": "IPv4", 00:19:15.111 "traddr": "10.0.0.2", 00:19:15.111 "trsvcid": "4420" 00:19:15.111 }, 00:19:15.111 "peer_address": { 00:19:15.111 "trtype": "TCP", 00:19:15.111 "adrfam": "IPv4", 00:19:15.111 "traddr": "10.0.0.1", 00:19:15.111 "trsvcid": "51018" 00:19:15.111 }, 00:19:15.111 "auth": { 00:19:15.111 "state": "completed", 00:19:15.111 "digest": "sha384", 00:19:15.111 "dhgroup": "null" 00:19:15.111 } 00:19:15.111 } 00:19:15.111 ]' 00:19:15.111 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:19:15.372 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.372 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.372 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:15.372 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.372 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.372 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.372 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.634 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:15.634 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:16.206 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.206 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.207 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.207 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.207 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.207 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.207 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:16.207 09:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:16.467 
09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.467 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.728 00:19:16.728 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.728 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.728 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.728 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.728 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.728 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.728 09:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.728 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.728 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.728 { 00:19:16.728 "cntlid": 55, 00:19:16.728 "qid": 0, 00:19:16.728 "state": "enabled", 00:19:16.729 "thread": "nvmf_tgt_poll_group_000", 00:19:16.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:16.729 "listen_address": { 00:19:16.729 "trtype": "TCP", 00:19:16.729 "adrfam": "IPv4", 00:19:16.729 "traddr": "10.0.0.2", 00:19:16.729 "trsvcid": "4420" 00:19:16.729 }, 00:19:16.729 "peer_address": { 00:19:16.729 "trtype": "TCP", 00:19:16.729 "adrfam": "IPv4", 00:19:16.729 "traddr": "10.0.0.1", 00:19:16.729 "trsvcid": "51046" 00:19:16.729 }, 00:19:16.729 "auth": { 00:19:16.729 "state": "completed", 00:19:16.729 "digest": "sha384", 00:19:16.729 "dhgroup": "null" 00:19:16.729 } 00:19:16.729 } 00:19:16.729 ]' 00:19:16.729 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.990 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.990 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.990 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:16.990 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.990 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.990 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.990 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.252 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:17.252 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:17.823 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.823 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.823 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.823 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.823 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.823 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.823 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.823 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:17.823 09:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.084 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.084 00:19:18.345 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.345 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.345 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.345 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.346 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.346 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.346 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.346 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.346 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.346 { 00:19:18.346 "cntlid": 57, 00:19:18.346 "qid": 0, 00:19:18.346 "state": "enabled", 00:19:18.346 "thread": "nvmf_tgt_poll_group_000", 00:19:18.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:18.346 "listen_address": { 00:19:18.346 "trtype": "TCP", 00:19:18.346 "adrfam": "IPv4", 00:19:18.346 "traddr": "10.0.0.2", 00:19:18.346 
"trsvcid": "4420" 00:19:18.346 }, 00:19:18.346 "peer_address": { 00:19:18.346 "trtype": "TCP", 00:19:18.346 "adrfam": "IPv4", 00:19:18.346 "traddr": "10.0.0.1", 00:19:18.346 "trsvcid": "55944" 00:19:18.346 }, 00:19:18.346 "auth": { 00:19:18.346 "state": "completed", 00:19:18.346 "digest": "sha384", 00:19:18.346 "dhgroup": "ffdhe2048" 00:19:18.346 } 00:19:18.346 } 00:19:18.346 ]' 00:19:18.346 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.607 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.607 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.607 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.607 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.607 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.607 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.607 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.607 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:18.607 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:19.551 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.551 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.551 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.551 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.551 09:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.551 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.811 00:19:19.812 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.812 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.812 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.074 { 00:19:20.074 "cntlid": 59, 00:19:20.074 "qid": 0, 00:19:20.074 "state": "enabled", 00:19:20.074 "thread": "nvmf_tgt_poll_group_000", 00:19:20.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:20.074 "listen_address": { 00:19:20.074 "trtype": "TCP", 00:19:20.074 "adrfam": "IPv4", 00:19:20.074 "traddr": "10.0.0.2", 00:19:20.074 "trsvcid": "4420" 00:19:20.074 }, 00:19:20.074 "peer_address": { 00:19:20.074 "trtype": "TCP", 00:19:20.074 "adrfam": "IPv4", 00:19:20.074 "traddr": "10.0.0.1", 00:19:20.074 "trsvcid": "55984" 00:19:20.074 }, 00:19:20.074 "auth": { 00:19:20.074 "state": "completed", 00:19:20.074 "digest": "sha384", 00:19:20.074 "dhgroup": "ffdhe2048" 00:19:20.074 } 00:19:20.074 } 00:19:20.074 ]' 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.074 09:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.074 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.335 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:20.335 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:20.907 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.908 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.908 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.908 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.908 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.908 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.908 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:20.908 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:21.168 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:21.168 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.168 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:21.168 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:21.168 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:21.168 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.169 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:21.169 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.169 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.169 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.169 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.169 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.169 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.430 00:19:21.430 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.430 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.430 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.692 09:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.692 { 00:19:21.692 "cntlid": 61, 00:19:21.692 "qid": 0, 00:19:21.692 "state": "enabled", 00:19:21.692 "thread": "nvmf_tgt_poll_group_000", 00:19:21.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:21.692 "listen_address": { 00:19:21.692 "trtype": "TCP", 00:19:21.692 "adrfam": "IPv4", 00:19:21.692 "traddr": "10.0.0.2", 00:19:21.692 "trsvcid": "4420" 00:19:21.692 }, 00:19:21.692 "peer_address": { 00:19:21.692 "trtype": "TCP", 00:19:21.692 "adrfam": "IPv4", 00:19:21.692 "traddr": "10.0.0.1", 00:19:21.692 "trsvcid": "56006" 00:19:21.692 }, 00:19:21.692 "auth": { 00:19:21.692 "state": "completed", 00:19:21.692 "digest": "sha384", 00:19:21.692 "dhgroup": "ffdhe2048" 00:19:21.692 } 00:19:21.692 } 00:19:21.692 ]' 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.692 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.953 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:21.953 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:22.525 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.525 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.525 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.525 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.525 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.525 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.525 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.525 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.786 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.046 00:19:23.046 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.046 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.046 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.307 { 00:19:23.307 "cntlid": 63, 00:19:23.307 "qid": 0, 00:19:23.307 "state": "enabled", 00:19:23.307 "thread": "nvmf_tgt_poll_group_000", 00:19:23.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.307 "listen_address": { 00:19:23.307 "trtype": "TCP", 00:19:23.307 "adrfam": 
"IPv4", 00:19:23.307 "traddr": "10.0.0.2", 00:19:23.307 "trsvcid": "4420" 00:19:23.307 }, 00:19:23.307 "peer_address": { 00:19:23.307 "trtype": "TCP", 00:19:23.307 "adrfam": "IPv4", 00:19:23.307 "traddr": "10.0.0.1", 00:19:23.307 "trsvcid": "56026" 00:19:23.307 }, 00:19:23.307 "auth": { 00:19:23.307 "state": "completed", 00:19:23.307 "digest": "sha384", 00:19:23.307 "dhgroup": "ffdhe2048" 00:19:23.307 } 00:19:23.307 } 00:19:23.307 ]' 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.307 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.569 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:23.569 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:24.142 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.142 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.142 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.142 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.142 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.142 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.142 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.142 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.142 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:24.403 
09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.403 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.664 00:19:24.664 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.664 09:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.664 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.925 { 00:19:24.925 "cntlid": 65, 00:19:24.925 "qid": 0, 00:19:24.925 "state": "enabled", 00:19:24.925 "thread": "nvmf_tgt_poll_group_000", 00:19:24.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.925 "listen_address": { 00:19:24.925 "trtype": "TCP", 00:19:24.925 "adrfam": "IPv4", 00:19:24.925 "traddr": "10.0.0.2", 00:19:24.925 "trsvcid": "4420" 00:19:24.925 }, 00:19:24.925 "peer_address": { 00:19:24.925 "trtype": "TCP", 00:19:24.925 "adrfam": "IPv4", 00:19:24.925 "traddr": "10.0.0.1", 00:19:24.925 "trsvcid": "56052" 00:19:24.925 }, 00:19:24.925 "auth": { 00:19:24.925 "state": "completed", 00:19:24.925 "digest": "sha384", 00:19:24.925 "dhgroup": "ffdhe3072" 00:19:24.925 } 00:19:24.925 } 00:19:24.925 ]' 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.925 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.187 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:25.187 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:25.758 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.758 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.758 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.758 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.758 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.758 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.758 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.020 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.281 00:19:26.281 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.281 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.281 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.542 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.542 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.542 09:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.542 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.542 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.542 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.542 { 00:19:26.542 "cntlid": 67, 00:19:26.542 "qid": 0, 00:19:26.542 "state": "enabled", 00:19:26.542 "thread": "nvmf_tgt_poll_group_000", 00:19:26.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:26.542 "listen_address": { 00:19:26.542 "trtype": "TCP", 00:19:26.542 "adrfam": "IPv4", 00:19:26.542 "traddr": "10.0.0.2", 00:19:26.542 "trsvcid": "4420" 00:19:26.542 }, 00:19:26.542 "peer_address": { 00:19:26.542 "trtype": "TCP", 00:19:26.542 "adrfam": "IPv4", 00:19:26.542 "traddr": "10.0.0.1", 00:19:26.542 "trsvcid": "56088" 00:19:26.542 }, 00:19:26.542 "auth": { 00:19:26.542 "state": "completed", 00:19:26.542 "digest": "sha384", 00:19:26.543 "dhgroup": "ffdhe3072" 00:19:26.543 } 00:19:26.543 } 00:19:26.543 ]' 00:19:26.543 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.543 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.543 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.543 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.543 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.543 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.543 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.543 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.804 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:26.804 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:27.376 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.376 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.376 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.376 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.376 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.376 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.376 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.376 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.637 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.898 00:19:27.898 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.898 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.898 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.159 { 00:19:28.159 "cntlid": 69, 00:19:28.159 "qid": 0, 00:19:28.159 "state": "enabled", 00:19:28.159 "thread": "nvmf_tgt_poll_group_000", 00:19:28.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.159 
"listen_address": { 00:19:28.159 "trtype": "TCP", 00:19:28.159 "adrfam": "IPv4", 00:19:28.159 "traddr": "10.0.0.2", 00:19:28.159 "trsvcid": "4420" 00:19:28.159 }, 00:19:28.159 "peer_address": { 00:19:28.159 "trtype": "TCP", 00:19:28.159 "adrfam": "IPv4", 00:19:28.159 "traddr": "10.0.0.1", 00:19:28.159 "trsvcid": "56118" 00:19:28.159 }, 00:19:28.159 "auth": { 00:19:28.159 "state": "completed", 00:19:28.159 "digest": "sha384", 00:19:28.159 "dhgroup": "ffdhe3072" 00:19:28.159 } 00:19:28.159 } 00:19:28.159 ]' 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.159 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.420 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:28.420 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:28.992 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.253 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.513 00:19:29.513 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.513 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:29.514 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.774 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.774 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.774 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.774 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.774 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.774 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.774 { 00:19:29.774 "cntlid": 71, 00:19:29.774 "qid": 0, 00:19:29.774 "state": "enabled", 00:19:29.774 "thread": "nvmf_tgt_poll_group_000", 00:19:29.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:29.774 "listen_address": { 00:19:29.775 "trtype": "TCP", 00:19:29.775 "adrfam": "IPv4", 00:19:29.775 "traddr": "10.0.0.2", 00:19:29.775 "trsvcid": "4420" 00:19:29.775 }, 00:19:29.775 "peer_address": { 00:19:29.775 "trtype": "TCP", 00:19:29.775 "adrfam": "IPv4", 00:19:29.775 "traddr": "10.0.0.1", 00:19:29.775 "trsvcid": "39956" 00:19:29.775 }, 00:19:29.775 "auth": { 00:19:29.775 "state": "completed", 00:19:29.775 "digest": "sha384", 00:19:29.775 "dhgroup": "ffdhe3072" 00:19:29.775 } 00:19:29.775 } 00:19:29.775 ]' 00:19:29.775 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.775 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.775 09:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.775 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.775 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.775 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.775 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.775 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.035 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:30.035 09:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:30.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:30.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.606 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.868 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.128 00:19:31.128 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.128 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.128 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.388 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.388 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.388 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.388 09:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.389 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.389 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.389 { 00:19:31.389 "cntlid": 73, 00:19:31.389 "qid": 0, 00:19:31.389 "state": "enabled", 00:19:31.389 "thread": "nvmf_tgt_poll_group_000", 00:19:31.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:31.389 "listen_address": { 00:19:31.389 "trtype": "TCP", 00:19:31.389 "adrfam": "IPv4", 00:19:31.389 "traddr": "10.0.0.2", 00:19:31.389 "trsvcid": "4420" 00:19:31.389 }, 00:19:31.389 "peer_address": { 00:19:31.389 "trtype": "TCP", 00:19:31.389 "adrfam": "IPv4", 00:19:31.389 "traddr": "10.0.0.1", 00:19:31.389 "trsvcid": "39976" 00:19:31.389 }, 00:19:31.389 "auth": { 00:19:31.389 "state": "completed", 00:19:31.389 "digest": "sha384", 00:19:31.389 "dhgroup": "ffdhe4096" 00:19:31.389 } 00:19:31.389 } 00:19:31.389 ]' 00:19:31.389 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.389 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.389 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.389 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.389 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.389 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.389 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.389 09:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.650 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:31.650 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:32.221 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.221 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.221 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.221 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.221 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.221 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.221 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:32.221 09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.482 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.743 00:19:32.743 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.743 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.743 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.004 { 00:19:33.004 "cntlid": 75, 00:19:33.004 "qid": 0, 00:19:33.004 "state": "enabled", 00:19:33.004 "thread": "nvmf_tgt_poll_group_000", 00:19:33.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:33.004 
"listen_address": { 00:19:33.004 "trtype": "TCP", 00:19:33.004 "adrfam": "IPv4", 00:19:33.004 "traddr": "10.0.0.2", 00:19:33.004 "trsvcid": "4420" 00:19:33.004 }, 00:19:33.004 "peer_address": { 00:19:33.004 "trtype": "TCP", 00:19:33.004 "adrfam": "IPv4", 00:19:33.004 "traddr": "10.0.0.1", 00:19:33.004 "trsvcid": "39994" 00:19:33.004 }, 00:19:33.004 "auth": { 00:19:33.004 "state": "completed", 00:19:33.004 "digest": "sha384", 00:19:33.004 "dhgroup": "ffdhe4096" 00:19:33.004 } 00:19:33.004 } 00:19:33.004 ]' 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.004 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.265 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:33.265 09:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:33.836 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.836 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.836 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.836 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.836 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.836 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.836 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:33.836 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:34.099 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.100 09:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.362 00:19:34.362 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:34.362 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.362 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.623 { 00:19:34.623 "cntlid": 77, 00:19:34.623 "qid": 0, 00:19:34.623 "state": "enabled", 00:19:34.623 "thread": "nvmf_tgt_poll_group_000", 00:19:34.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:34.623 "listen_address": { 00:19:34.623 "trtype": "TCP", 00:19:34.623 "adrfam": "IPv4", 00:19:34.623 "traddr": "10.0.0.2", 00:19:34.623 "trsvcid": "4420" 00:19:34.623 }, 00:19:34.623 "peer_address": { 00:19:34.623 "trtype": "TCP", 00:19:34.623 "adrfam": "IPv4", 00:19:34.623 "traddr": "10.0.0.1", 00:19:34.623 "trsvcid": "40014" 00:19:34.623 }, 00:19:34.623 "auth": { 00:19:34.623 "state": "completed", 00:19:34.623 "digest": "sha384", 00:19:34.623 "dhgroup": "ffdhe4096" 00:19:34.623 } 00:19:34.623 } 00:19:34.623 ]' 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.623 09:37:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.623 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.884 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:34.884 09:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:35.455 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:35.715 09:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.715 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.975 00:19:35.975 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.975 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.975 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.236 09:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.236 { 00:19:36.236 "cntlid": 79, 00:19:36.236 "qid": 0, 00:19:36.236 "state": "enabled", 00:19:36.236 "thread": "nvmf_tgt_poll_group_000", 00:19:36.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.236 "listen_address": { 00:19:36.236 "trtype": "TCP", 00:19:36.236 "adrfam": "IPv4", 00:19:36.236 "traddr": "10.0.0.2", 00:19:36.236 "trsvcid": "4420" 00:19:36.236 }, 00:19:36.236 "peer_address": { 00:19:36.236 "trtype": "TCP", 00:19:36.236 "adrfam": "IPv4", 00:19:36.236 "traddr": "10.0.0.1", 00:19:36.236 "trsvcid": "40038" 00:19:36.236 }, 00:19:36.236 "auth": { 00:19:36.236 "state": "completed", 00:19:36.236 "digest": "sha384", 00:19:36.236 "dhgroup": "ffdhe4096" 00:19:36.236 } 00:19:36.236 } 00:19:36.236 ]' 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.236 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.497 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.497 09:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.497 09:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.497 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:36.497 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:37.068 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.329 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.329 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.329 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.329 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.329 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.329 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.329 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:19:37.329 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.329 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.901 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.901 { 00:19:37.901 "cntlid": 81, 00:19:37.901 "qid": 0, 00:19:37.901 "state": "enabled", 00:19:37.901 "thread": "nvmf_tgt_poll_group_000", 00:19:37.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:37.901 "listen_address": { 
00:19:37.901 "trtype": "TCP", 00:19:37.901 "adrfam": "IPv4", 00:19:37.901 "traddr": "10.0.0.2", 00:19:37.901 "trsvcid": "4420" 00:19:37.901 }, 00:19:37.901 "peer_address": { 00:19:37.901 "trtype": "TCP", 00:19:37.901 "adrfam": "IPv4", 00:19:37.901 "traddr": "10.0.0.1", 00:19:37.901 "trsvcid": "40064" 00:19:37.901 }, 00:19:37.901 "auth": { 00:19:37.901 "state": "completed", 00:19:37.901 "digest": "sha384", 00:19:37.901 "dhgroup": "ffdhe6144" 00:19:37.901 } 00:19:37.901 } 00:19:37.901 ]' 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.901 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.161 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.161 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.161 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.161 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.161 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.161 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:38.161 09:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.104 09:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.365 00:19:39.365 09:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.365 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.365 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.626 { 00:19:39.626 "cntlid": 83, 00:19:39.626 "qid": 0, 00:19:39.626 "state": "enabled", 00:19:39.626 "thread": "nvmf_tgt_poll_group_000", 00:19:39.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:39.626 "listen_address": { 00:19:39.626 "trtype": "TCP", 00:19:39.626 "adrfam": "IPv4", 00:19:39.626 "traddr": "10.0.0.2", 00:19:39.626 "trsvcid": "4420" 00:19:39.626 }, 00:19:39.626 "peer_address": { 00:19:39.626 "trtype": "TCP", 00:19:39.626 "adrfam": "IPv4", 00:19:39.626 "traddr": "10.0.0.1", 00:19:39.626 "trsvcid": "42126" 00:19:39.626 }, 00:19:39.626 "auth": { 00:19:39.626 "state": "completed", 00:19:39.626 "digest": "sha384", 00:19:39.626 "dhgroup": "ffdhe6144" 00:19:39.626 } 00:19:39.626 } 00:19:39.626 ]' 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.626 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.887 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.887 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.887 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.888 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:39.888 09:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.831 09:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.831 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.092 00:19:41.092 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.092 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.092 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.352 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.352 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.352 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.352 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.352 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.352 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.352 { 00:19:41.352 "cntlid": 85, 00:19:41.352 "qid": 0, 00:19:41.352 "state": "enabled", 00:19:41.352 "thread": "nvmf_tgt_poll_group_000", 00:19:41.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.352 "listen_address": { 00:19:41.352 "trtype": "TCP", 00:19:41.352 "adrfam": "IPv4", 00:19:41.352 "traddr": "10.0.0.2", 00:19:41.352 "trsvcid": "4420" 00:19:41.352 }, 00:19:41.352 "peer_address": { 00:19:41.352 "trtype": "TCP", 00:19:41.352 "adrfam": "IPv4", 00:19:41.352 "traddr": "10.0.0.1", 00:19:41.352 "trsvcid": "42160" 00:19:41.352 }, 00:19:41.352 "auth": { 00:19:41.352 "state": "completed", 00:19:41.352 "digest": "sha384", 00:19:41.352 "dhgroup": "ffdhe6144" 00:19:41.352 } 00:19:41.352 } 00:19:41.352 ]' 00:19:41.352 09:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.352 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.352 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.352 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.352 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.613 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:41.613 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.613 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.613 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:41.613 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:42.553 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.553 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.553 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.553 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.553 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.553 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:42.553 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:42.553 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:42.553 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:42.553 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.553 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.553 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:42.553 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.553 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.554 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:42.554 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.554 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.554 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.554 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:42.554 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.554 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.815 00:19:42.815 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.815 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.815 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.076 { 00:19:43.076 "cntlid": 87, 00:19:43.076 "qid": 0, 00:19:43.076 "state": "enabled", 00:19:43.076 "thread": "nvmf_tgt_poll_group_000", 00:19:43.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:43.076 "listen_address": { 00:19:43.076 "trtype": 
"TCP", 00:19:43.076 "adrfam": "IPv4", 00:19:43.076 "traddr": "10.0.0.2", 00:19:43.076 "trsvcid": "4420" 00:19:43.076 }, 00:19:43.076 "peer_address": { 00:19:43.076 "trtype": "TCP", 00:19:43.076 "adrfam": "IPv4", 00:19:43.076 "traddr": "10.0.0.1", 00:19:43.076 "trsvcid": "42194" 00:19:43.076 }, 00:19:43.076 "auth": { 00:19:43.076 "state": "completed", 00:19:43.076 "digest": "sha384", 00:19:43.076 "dhgroup": "ffdhe6144" 00:19:43.076 } 00:19:43.076 } 00:19:43.076 ]' 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.076 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.337 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.337 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.337 09:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.337 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:43.337 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.283 09:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.283 09:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.855 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.855 { 00:19:44.855 "cntlid": 89, 00:19:44.855 "qid": 0, 00:19:44.855 "state": "enabled", 00:19:44.855 "thread": "nvmf_tgt_poll_group_000", 00:19:44.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:44.855 "listen_address": { 00:19:44.855 "trtype": "TCP", 00:19:44.855 "adrfam": "IPv4", 00:19:44.855 "traddr": "10.0.0.2", 00:19:44.855 "trsvcid": "4420" 00:19:44.855 }, 00:19:44.855 "peer_address": { 00:19:44.855 "trtype": "TCP", 00:19:44.855 "adrfam": "IPv4", 00:19:44.855 "traddr": "10.0.0.1", 00:19:44.855 "trsvcid": "42214" 00:19:44.855 }, 00:19:44.855 "auth": { 00:19:44.855 "state": "completed", 00:19:44.855 "digest": "sha384", 00:19:44.855 "dhgroup": "ffdhe8192" 00:19:44.855 } 00:19:44.855 } 00:19:44.855 ]' 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.855 09:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.855 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.116 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.116 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.116 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.116 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.116 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.379 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:45.379 09:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:45.951 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:45.951 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.951 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.951 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.951 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.951 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.951 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:45.951 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.213 09:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.474 00:19:46.474 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.474 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.474 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.737 { 00:19:46.737 "cntlid": 91, 00:19:46.737 "qid": 0, 00:19:46.737 "state": "enabled", 00:19:46.737 "thread": "nvmf_tgt_poll_group_000", 00:19:46.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:46.737 "listen_address": { 00:19:46.737 "trtype": "TCP", 00:19:46.737 "adrfam": "IPv4", 00:19:46.737 "traddr": "10.0.0.2", 00:19:46.737 "trsvcid": "4420" 00:19:46.737 }, 00:19:46.737 "peer_address": { 00:19:46.737 "trtype": "TCP", 00:19:46.737 "adrfam": "IPv4", 00:19:46.737 "traddr": "10.0.0.1", 00:19:46.737 "trsvcid": "42240" 00:19:46.737 }, 00:19:46.737 "auth": { 00:19:46.737 "state": "completed", 00:19:46.737 "digest": "sha384", 00:19:46.737 "dhgroup": "ffdhe8192" 00:19:46.737 } 00:19:46.737 } 00:19:46.737 ]' 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.737 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.999 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:46.999 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.999 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.999 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:46.999 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.944 09:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.517 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.517 { 00:19:48.517 "cntlid": 93, 00:19:48.517 "qid": 0, 00:19:48.517 "state": "enabled", 00:19:48.517 "thread": "nvmf_tgt_poll_group_000", 00:19:48.517 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:48.517 "listen_address": { 00:19:48.517 "trtype": "TCP", 00:19:48.517 "adrfam": "IPv4", 00:19:48.517 "traddr": "10.0.0.2", 00:19:48.517 "trsvcid": "4420" 00:19:48.517 }, 00:19:48.517 "peer_address": { 00:19:48.517 "trtype": "TCP", 00:19:48.517 "adrfam": "IPv4", 00:19:48.517 "traddr": "10.0.0.1", 00:19:48.517 "trsvcid": "37248" 00:19:48.517 }, 00:19:48.517 "auth": { 00:19:48.517 "state": "completed", 00:19:48.517 "digest": "sha384", 00:19:48.517 "dhgroup": "ffdhe8192" 00:19:48.517 } 00:19:48.517 } 00:19:48.517 ]' 00:19:48.517 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.778 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.778 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.778 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.778 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.778 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.778 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.778 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.039 09:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:49.039 09:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.611 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:49.873 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.873 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.873 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.873 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:49.873 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.873 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.134 00:19:50.134 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:50.134 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.134 09:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.396 { 00:19:50.396 "cntlid": 95, 00:19:50.396 "qid": 0, 00:19:50.396 "state": "enabled", 00:19:50.396 "thread": "nvmf_tgt_poll_group_000", 00:19:50.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:50.396 "listen_address": { 00:19:50.396 "trtype": "TCP", 00:19:50.396 "adrfam": "IPv4", 00:19:50.396 "traddr": "10.0.0.2", 00:19:50.396 "trsvcid": "4420" 00:19:50.396 }, 00:19:50.396 "peer_address": { 00:19:50.396 "trtype": "TCP", 00:19:50.396 "adrfam": "IPv4", 00:19:50.396 "traddr": "10.0.0.1", 00:19:50.396 "trsvcid": "37282" 00:19:50.396 }, 00:19:50.396 "auth": { 00:19:50.396 "state": "completed", 00:19:50.396 "digest": "sha384", 00:19:50.396 "dhgroup": "ffdhe8192" 00:19:50.396 } 00:19:50.396 } 00:19:50.396 ]' 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.396 09:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.396 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.657 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.657 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.657 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.657 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:50.657 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.227 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.488 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.755 00:19:51.755 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.755 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.755 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.016 09:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.016 { 00:19:52.016 "cntlid": 97, 00:19:52.016 "qid": 0, 00:19:52.016 "state": "enabled", 00:19:52.016 "thread": "nvmf_tgt_poll_group_000", 00:19:52.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.016 "listen_address": { 00:19:52.016 "trtype": "TCP", 00:19:52.016 "adrfam": "IPv4", 00:19:52.016 "traddr": "10.0.0.2", 00:19:52.016 "trsvcid": "4420" 00:19:52.016 }, 00:19:52.016 "peer_address": { 00:19:52.016 "trtype": "TCP", 00:19:52.016 "adrfam": "IPv4", 00:19:52.016 "traddr": "10.0.0.1", 00:19:52.016 "trsvcid": "37298" 00:19:52.016 }, 00:19:52.016 "auth": { 00:19:52.016 "state": "completed", 00:19:52.016 "digest": "sha512", 00:19:52.016 "dhgroup": "null" 00:19:52.016 } 00:19:52.016 } 00:19:52.016 ]' 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.016 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.276 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:52.276 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:52.848 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.848 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.848 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.848 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.848 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.848 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.848 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:52.848 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.109 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:53.109 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.110 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.371 00:19:53.371 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.371 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.371 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.632 { 00:19:53.632 "cntlid": 99, 
00:19:53.632 "qid": 0, 00:19:53.632 "state": "enabled", 00:19:53.632 "thread": "nvmf_tgt_poll_group_000", 00:19:53.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:53.632 "listen_address": { 00:19:53.632 "trtype": "TCP", 00:19:53.632 "adrfam": "IPv4", 00:19:53.632 "traddr": "10.0.0.2", 00:19:53.632 "trsvcid": "4420" 00:19:53.632 }, 00:19:53.632 "peer_address": { 00:19:53.632 "trtype": "TCP", 00:19:53.632 "adrfam": "IPv4", 00:19:53.632 "traddr": "10.0.0.1", 00:19:53.632 "trsvcid": "37308" 00:19:53.632 }, 00:19:53.632 "auth": { 00:19:53.632 "state": "completed", 00:19:53.632 "digest": "sha512", 00:19:53.632 "dhgroup": "null" 00:19:53.632 } 00:19:53.632 } 00:19:53.632 ]' 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.632 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.893 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret 
DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:53.893 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:19:54.600 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.600 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.600 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.600 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.600 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.600 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.600 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:54.600 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.862 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.862 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.124 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.124 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.124 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.124 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.125 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.125 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.125 { 00:19:55.125 "cntlid": 101, 00:19:55.125 "qid": 0, 00:19:55.125 "state": "enabled", 00:19:55.125 "thread": "nvmf_tgt_poll_group_000", 00:19:55.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:55.125 "listen_address": { 00:19:55.125 "trtype": "TCP", 00:19:55.125 "adrfam": "IPv4", 00:19:55.125 "traddr": "10.0.0.2", 00:19:55.125 "trsvcid": "4420" 00:19:55.125 }, 00:19:55.125 "peer_address": { 00:19:55.125 "trtype": "TCP", 00:19:55.125 "adrfam": "IPv4", 00:19:55.125 "traddr": "10.0.0.1", 00:19:55.125 "trsvcid": "37326" 00:19:55.125 }, 00:19:55.125 "auth": { 00:19:55.125 "state": "completed", 00:19:55.125 "digest": "sha512", 00:19:55.125 "dhgroup": "null" 00:19:55.125 } 00:19:55.125 } 
00:19:55.125 ]' 00:19:55.125 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.125 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.125 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.388 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:55.388 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.388 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.388 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.388 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.388 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:55.388 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.332 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.332 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.333 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.594 00:19:56.594 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.594 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.594 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.856 { 00:19:56.856 "cntlid": 103, 00:19:56.856 "qid": 0, 00:19:56.856 "state": "enabled", 00:19:56.856 "thread": "nvmf_tgt_poll_group_000", 00:19:56.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:56.856 "listen_address": { 00:19:56.856 "trtype": "TCP", 00:19:56.856 "adrfam": "IPv4", 00:19:56.856 "traddr": "10.0.0.2", 00:19:56.856 "trsvcid": "4420" 00:19:56.856 }, 00:19:56.856 "peer_address": { 00:19:56.856 "trtype": "TCP", 00:19:56.856 "adrfam": "IPv4", 00:19:56.856 "traddr": "10.0.0.1", 00:19:56.856 "trsvcid": "37346" 00:19:56.856 }, 00:19:56.856 "auth": { 00:19:56.856 "state": "completed", 00:19:56.856 "digest": "sha512", 00:19:56.856 "dhgroup": "null" 00:19:56.856 } 00:19:56.856 } 00:19:56.856 ]' 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.856 09:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.856 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.119 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:57.119 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:19:57.692 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.692 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.692 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.692 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.692 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.692 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.692 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.692 09:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.692 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.958 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.242 00:19:58.242 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.242 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.242 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.242 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.242 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.242 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.242 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.530 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.530 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.530 { 00:19:58.530 "cntlid": 105, 00:19:58.530 "qid": 0, 00:19:58.530 "state": "enabled", 00:19:58.530 "thread": "nvmf_tgt_poll_group_000", 00:19:58.530 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:58.530 "listen_address": { 00:19:58.530 "trtype": "TCP", 00:19:58.530 "adrfam": "IPv4", 00:19:58.530 "traddr": "10.0.0.2", 00:19:58.530 "trsvcid": "4420" 00:19:58.530 }, 00:19:58.530 "peer_address": { 00:19:58.530 "trtype": "TCP", 00:19:58.530 "adrfam": "IPv4", 00:19:58.530 "traddr": "10.0.0.1", 00:19:58.530 "trsvcid": "58956" 00:19:58.530 }, 00:19:58.530 "auth": { 00:19:58.530 "state": "completed", 00:19:58.530 "digest": "sha512", 00:19:58.530 "dhgroup": "ffdhe2048" 00:19:58.530 } 00:19:58.530 } 00:19:58.530 ]' 00:19:58.530 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.530 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.530 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.530 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.530 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.530 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.530 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.530 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.803 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret 
DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:58.803 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:19:59.382 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.382 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.382 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.382 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.382 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.382 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.382 09:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.644 09:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.644 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.645 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.645 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.910 { 00:19:59.910 "cntlid": 107, 00:19:59.910 "qid": 0, 00:19:59.910 "state": "enabled", 00:19:59.910 "thread": "nvmf_tgt_poll_group_000", 00:19:59.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:59.910 "listen_address": { 00:19:59.910 "trtype": "TCP", 00:19:59.910 "adrfam": "IPv4", 00:19:59.910 "traddr": "10.0.0.2", 00:19:59.910 "trsvcid": "4420" 00:19:59.910 }, 00:19:59.910 "peer_address": { 00:19:59.910 "trtype": "TCP", 00:19:59.910 "adrfam": "IPv4", 00:19:59.910 "traddr": "10.0.0.1", 00:19:59.910 "trsvcid": "58982" 00:19:59.910 }, 00:19:59.910 "auth": { 00:19:59.910 "state": 
"completed", 00:19:59.910 "digest": "sha512", 00:19:59.910 "dhgroup": "ffdhe2048" 00:19:59.910 } 00:19:59.910 } 00:19:59.910 ]' 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.910 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.175 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.175 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.175 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.175 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.175 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.175 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:00.175 09:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:01.128 09:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.128 09:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.398 00:20:01.398 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.398 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.398 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.668 
09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.668 { 00:20:01.668 "cntlid": 109, 00:20:01.668 "qid": 0, 00:20:01.668 "state": "enabled", 00:20:01.668 "thread": "nvmf_tgt_poll_group_000", 00:20:01.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:01.668 "listen_address": { 00:20:01.668 "trtype": "TCP", 00:20:01.668 "adrfam": "IPv4", 00:20:01.668 "traddr": "10.0.0.2", 00:20:01.668 "trsvcid": "4420" 00:20:01.668 }, 00:20:01.668 "peer_address": { 00:20:01.668 "trtype": "TCP", 00:20:01.668 "adrfam": "IPv4", 00:20:01.668 "traddr": "10.0.0.1", 00:20:01.668 "trsvcid": "59020" 00:20:01.668 }, 00:20:01.668 "auth": { 00:20:01.668 "state": "completed", 00:20:01.668 "digest": "sha512", 00:20:01.668 "dhgroup": "ffdhe2048" 00:20:01.668 } 00:20:01.668 } 00:20:01.668 ]' 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.668 09:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.668 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.940 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:01.940 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:02.575 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.575 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.575 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.575 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.575 
09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.575 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.575 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.575 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.846 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:02.846 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.846 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:02.846 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.846 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.846 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.847 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:02.847 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.847 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.847 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.847 09:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.847 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.847 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.847 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.134 { 00:20:03.134 "cntlid": 111, 
00:20:03.134 "qid": 0, 00:20:03.134 "state": "enabled", 00:20:03.134 "thread": "nvmf_tgt_poll_group_000", 00:20:03.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:03.134 "listen_address": { 00:20:03.134 "trtype": "TCP", 00:20:03.134 "adrfam": "IPv4", 00:20:03.134 "traddr": "10.0.0.2", 00:20:03.134 "trsvcid": "4420" 00:20:03.134 }, 00:20:03.134 "peer_address": { 00:20:03.134 "trtype": "TCP", 00:20:03.134 "adrfam": "IPv4", 00:20:03.134 "traddr": "10.0.0.1", 00:20:03.134 "trsvcid": "59046" 00:20:03.134 }, 00:20:03.134 "auth": { 00:20:03.134 "state": "completed", 00:20:03.134 "digest": "sha512", 00:20:03.134 "dhgroup": "ffdhe2048" 00:20:03.134 } 00:20:03.134 } 00:20:03.134 ]' 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.134 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.411 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.411 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.411 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.411 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.411 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.411 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:03.411 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:04.035 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.035 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.035 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.035 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.035 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.035 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.035 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.035 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.035 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.296 09:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.296 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.558 00:20:04.558 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.558 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.558 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.821 { 00:20:04.821 "cntlid": 113, 00:20:04.821 "qid": 0, 00:20:04.821 "state": "enabled", 00:20:04.821 "thread": "nvmf_tgt_poll_group_000", 00:20:04.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:04.821 "listen_address": { 00:20:04.821 "trtype": "TCP", 00:20:04.821 "adrfam": "IPv4", 00:20:04.821 "traddr": "10.0.0.2", 00:20:04.821 "trsvcid": "4420" 00:20:04.821 }, 00:20:04.821 "peer_address": { 00:20:04.821 "trtype": "TCP", 00:20:04.821 "adrfam": "IPv4", 00:20:04.821 "traddr": "10.0.0.1", 00:20:04.821 "trsvcid": "59060" 00:20:04.821 }, 00:20:04.821 "auth": { 00:20:04.821 "state": 
"completed", 00:20:04.821 "digest": "sha512", 00:20:04.821 "dhgroup": "ffdhe3072" 00:20:04.821 } 00:20:04.821 } 00:20:04.821 ]' 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.821 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.084 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:05.084 09:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret 
DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:05.661 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.924 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.188 00:20:06.188 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.188 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.188 09:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.454 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.454 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.454 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.454 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.454 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.454 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.454 { 00:20:06.454 "cntlid": 115, 00:20:06.454 "qid": 0, 00:20:06.454 "state": "enabled", 00:20:06.454 "thread": "nvmf_tgt_poll_group_000", 00:20:06.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:06.454 "listen_address": { 00:20:06.454 "trtype": "TCP", 00:20:06.454 "adrfam": "IPv4", 00:20:06.454 "traddr": "10.0.0.2", 00:20:06.454 "trsvcid": "4420" 00:20:06.454 }, 00:20:06.454 "peer_address": { 00:20:06.454 "trtype": "TCP", 00:20:06.454 "adrfam": "IPv4", 00:20:06.454 "traddr": "10.0.0.1", 00:20:06.454 "trsvcid": "59086" 00:20:06.454 }, 00:20:06.454 "auth": { 00:20:06.455 "state": "completed", 00:20:06.455 "digest": "sha512", 00:20:06.455 "dhgroup": "ffdhe3072" 00:20:06.455 } 00:20:06.455 } 00:20:06.455 ]' 00:20:06.455 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.455 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.455 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.455 09:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.455 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.721 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.721 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.721 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.721 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:06.721 09:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.671 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.938 00:20:07.938 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.938 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.938 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.209 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.209 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.209 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.209 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.209 09:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.209 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.209 { 00:20:08.209 "cntlid": 117, 00:20:08.209 "qid": 0, 00:20:08.209 "state": "enabled", 00:20:08.209 "thread": "nvmf_tgt_poll_group_000", 00:20:08.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:08.209 "listen_address": { 00:20:08.209 "trtype": "TCP", 00:20:08.209 "adrfam": "IPv4", 00:20:08.209 "traddr": "10.0.0.2", 00:20:08.209 "trsvcid": "4420" 00:20:08.209 }, 00:20:08.209 "peer_address": { 00:20:08.209 "trtype": "TCP", 00:20:08.209 "adrfam": "IPv4", 00:20:08.209 "traddr": "10.0.0.1", 00:20:08.209 "trsvcid": "59106" 00:20:08.209 }, 00:20:08.209 "auth": { 00:20:08.209 "state": "completed", 00:20:08.209 "digest": "sha512", 00:20:08.209 "dhgroup": "ffdhe3072" 00:20:08.209 } 00:20:08.209 } 00:20:08.209 ]' 00:20:08.209 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.209 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.209 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.209 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.210 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.210 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.210 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.210 09:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.484 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:08.484 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:09.079 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.079 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.079 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.079 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.079 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.079 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.079 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.079 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.356 09:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.631 00:20:09.631 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.631 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.631 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.631 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.632 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.632 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.632 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.632 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.632 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.632 { 00:20:09.632 "cntlid": 119, 00:20:09.632 "qid": 0, 00:20:09.632 "state": "enabled", 00:20:09.632 "thread": "nvmf_tgt_poll_group_000", 00:20:09.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:09.632 "listen_address": { 00:20:09.632 "trtype": "TCP", 00:20:09.632 "adrfam": "IPv4", 00:20:09.632 "traddr": "10.0.0.2", 00:20:09.632 "trsvcid": "4420" 00:20:09.632 }, 00:20:09.632 "peer_address": { 00:20:09.632 "trtype": "TCP", 00:20:09.632 "adrfam": "IPv4", 00:20:09.632 "traddr": "10.0.0.1", 
00:20:09.632 "trsvcid": "37880" 00:20:09.632 }, 00:20:09.632 "auth": { 00:20:09.632 "state": "completed", 00:20:09.632 "digest": "sha512", 00:20:09.632 "dhgroup": "ffdhe3072" 00:20:09.632 } 00:20:09.632 } 00:20:09.632 ]' 00:20:09.632 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.907 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.907 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.907 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.907 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.907 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.907 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.907 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.194 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:10.194 09:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.776 09:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.776 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.042 00:20:11.042 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.042 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.042 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.313 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.313 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.313 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.313 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.313 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.313 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.313 { 00:20:11.313 "cntlid": 121, 00:20:11.313 "qid": 0, 00:20:11.313 "state": "enabled", 00:20:11.313 "thread": "nvmf_tgt_poll_group_000", 00:20:11.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:11.313 "listen_address": { 00:20:11.313 "trtype": "TCP", 00:20:11.313 "adrfam": "IPv4", 00:20:11.313 "traddr": "10.0.0.2", 00:20:11.313 "trsvcid": "4420" 00:20:11.313 }, 00:20:11.313 "peer_address": { 00:20:11.313 "trtype": "TCP", 00:20:11.313 "adrfam": "IPv4", 00:20:11.313 "traddr": "10.0.0.1", 00:20:11.313 "trsvcid": "37910" 00:20:11.313 }, 00:20:11.313 "auth": { 00:20:11.313 "state": "completed", 00:20:11.313 "digest": "sha512", 00:20:11.313 "dhgroup": "ffdhe4096" 00:20:11.313 } 00:20:11.313 } 00:20:11.313 ]' 00:20:11.313 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.313 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.313 09:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.313 09:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.313 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.589 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.589 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.589 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.589 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:11.589 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:12.190 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.190 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.190 09:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.190 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.190 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.190 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.190 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.190 09:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.464 09:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.464 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.729 00:20:12.729 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.729 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.729 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.007 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.007 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.008 { 00:20:13.008 "cntlid": 123, 00:20:13.008 "qid": 0, 00:20:13.008 "state": "enabled", 00:20:13.008 "thread": "nvmf_tgt_poll_group_000", 00:20:13.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:13.008 "listen_address": { 00:20:13.008 "trtype": "TCP", 00:20:13.008 "adrfam": "IPv4", 00:20:13.008 "traddr": "10.0.0.2", 00:20:13.008 "trsvcid": "4420" 00:20:13.008 }, 00:20:13.008 "peer_address": { 00:20:13.008 "trtype": "TCP", 00:20:13.008 "adrfam": "IPv4", 00:20:13.008 "traddr": "10.0.0.1", 00:20:13.008 "trsvcid": "37926" 00:20:13.008 }, 00:20:13.008 "auth": { 00:20:13.008 "state": "completed", 00:20:13.008 "digest": "sha512", 00:20:13.008 "dhgroup": "ffdhe4096" 00:20:13.008 } 00:20:13.008 } 00:20:13.008 ]' 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.008 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.299 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:13.299 09:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:13.909 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.909 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.909 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.909 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.909 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.909 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.909 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:13.909 09:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.179 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.444 00:20:14.444 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.444 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.444 09:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.444 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.444 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.444 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.444 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.444 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.444 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.444 { 00:20:14.444 "cntlid": 125, 00:20:14.444 "qid": 0, 00:20:14.444 "state": "enabled", 00:20:14.444 "thread": "nvmf_tgt_poll_group_000", 00:20:14.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:14.444 "listen_address": { 00:20:14.444 "trtype": "TCP", 00:20:14.444 "adrfam": "IPv4", 00:20:14.444 "traddr": "10.0.0.2", 00:20:14.444 
"trsvcid": "4420" 00:20:14.444 }, 00:20:14.444 "peer_address": { 00:20:14.444 "trtype": "TCP", 00:20:14.444 "adrfam": "IPv4", 00:20:14.444 "traddr": "10.0.0.1", 00:20:14.444 "trsvcid": "37948" 00:20:14.444 }, 00:20:14.444 "auth": { 00:20:14.444 "state": "completed", 00:20:14.444 "digest": "sha512", 00:20:14.444 "dhgroup": "ffdhe4096" 00:20:14.444 } 00:20:14.444 } 00:20:14.444 ]' 00:20:14.444 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.713 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.713 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.713 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.713 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.713 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.713 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.713 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.982 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:14.982 09:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:15.568 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.568 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.568 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.568 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.568 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.568 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.568 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:15.568 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:15.839 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:15.839 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.839 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:15.839 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:15.840 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:15.840 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.840 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:15.840 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.840 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.840 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.840 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:15.840 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:15.840 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.111 00:20:16.111 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.111 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.112 { 00:20:16.112 "cntlid": 127, 00:20:16.112 "qid": 0, 00:20:16.112 "state": "enabled", 00:20:16.112 "thread": "nvmf_tgt_poll_group_000", 00:20:16.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:16.112 "listen_address": { 00:20:16.112 "trtype": "TCP", 00:20:16.112 "adrfam": "IPv4", 00:20:16.112 "traddr": "10.0.0.2", 00:20:16.112 "trsvcid": "4420" 00:20:16.112 }, 00:20:16.112 "peer_address": { 00:20:16.112 "trtype": "TCP", 00:20:16.112 "adrfam": "IPv4", 00:20:16.112 "traddr": "10.0.0.1", 00:20:16.112 "trsvcid": "37986" 00:20:16.112 }, 00:20:16.112 "auth": { 00:20:16.112 "state": "completed", 00:20:16.112 "digest": "sha512", 00:20:16.112 "dhgroup": "ffdhe4096" 00:20:16.112 } 00:20:16.112 } 00:20:16.112 ]' 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.112 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.391 09:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.391 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.391 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.391 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.391 09:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.391 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:16.391 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:17.376 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:17.377 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:17.377 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.377 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.377 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.377 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:17.377 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.377 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.377 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.377 09:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.659 00:20:17.659 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.659 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.659 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.955 09:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.955 { 00:20:17.955 "cntlid": 129, 00:20:17.955 "qid": 0, 00:20:17.955 "state": "enabled", 00:20:17.955 "thread": "nvmf_tgt_poll_group_000", 00:20:17.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:17.955 "listen_address": { 00:20:17.955 "trtype": "TCP", 00:20:17.955 "adrfam": "IPv4", 00:20:17.955 "traddr": "10.0.0.2", 00:20:17.955 "trsvcid": "4420" 00:20:17.955 }, 00:20:17.955 "peer_address": { 00:20:17.955 "trtype": "TCP", 00:20:17.955 "adrfam": "IPv4", 00:20:17.955 "traddr": "10.0.0.1", 00:20:17.955 "trsvcid": "38014" 00:20:17.955 }, 00:20:17.955 "auth": { 00:20:17.955 "state": "completed", 00:20:17.955 "digest": "sha512", 00:20:17.955 "dhgroup": "ffdhe6144" 00:20:17.955 } 00:20:17.955 } 00:20:17.955 ]' 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.955 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.245 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:18.245 09:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:18.839 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.839 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.839 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.839 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.839 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.839 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.839 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.839 09:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.111 09:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.385 00:20:19.385 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.385 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.385 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.675 { 00:20:19.675 "cntlid": 131, 00:20:19.675 "qid": 0, 00:20:19.675 "state": "enabled", 00:20:19.675 "thread": "nvmf_tgt_poll_group_000", 00:20:19.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:19.675 "listen_address": { 00:20:19.675 "trtype": "TCP", 00:20:19.675 "adrfam": "IPv4", 00:20:19.675 "traddr": "10.0.0.2", 00:20:19.675 
"trsvcid": "4420" 00:20:19.675 }, 00:20:19.675 "peer_address": { 00:20:19.675 "trtype": "TCP", 00:20:19.675 "adrfam": "IPv4", 00:20:19.675 "traddr": "10.0.0.1", 00:20:19.675 "trsvcid": "42438" 00:20:19.675 }, 00:20:19.675 "auth": { 00:20:19.675 "state": "completed", 00:20:19.675 "digest": "sha512", 00:20:19.675 "dhgroup": "ffdhe6144" 00:20:19.675 } 00:20:19.675 } 00:20:19.675 ]' 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.675 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.956 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:19.956 09:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:20.536 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.536 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:20.537 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.537 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.537 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.537 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.537 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:20.537 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:20.798 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:20.798 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.798 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:20.798 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:20.798 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:20.798 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.799 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.799 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.799 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.799 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.799 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.799 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.799 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.061 00:20:21.061 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.061 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:21.061 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.322 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.322 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.323 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.323 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.323 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.323 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.323 { 00:20:21.323 "cntlid": 133, 00:20:21.323 "qid": 0, 00:20:21.323 "state": "enabled", 00:20:21.323 "thread": "nvmf_tgt_poll_group_000", 00:20:21.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:21.323 "listen_address": { 00:20:21.323 "trtype": "TCP", 00:20:21.323 "adrfam": "IPv4", 00:20:21.323 "traddr": "10.0.0.2", 00:20:21.323 "trsvcid": "4420" 00:20:21.323 }, 00:20:21.323 "peer_address": { 00:20:21.323 "trtype": "TCP", 00:20:21.323 "adrfam": "IPv4", 00:20:21.323 "traddr": "10.0.0.1", 00:20:21.323 "trsvcid": "42460" 00:20:21.323 }, 00:20:21.323 "auth": { 00:20:21.323 "state": "completed", 00:20:21.323 "digest": "sha512", 00:20:21.323 "dhgroup": "ffdhe6144" 00:20:21.323 } 00:20:21.323 } 00:20:21.323 ]' 00:20:21.323 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.323 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.323 09:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.323 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.323 09:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.323 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.323 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.323 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.584 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:21.584 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:22.156 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.156 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.156 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.156 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.156 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.156 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.156 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:22.156 09:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.417 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.677 00:20:22.677 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.677 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.677 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.938 { 00:20:22.938 "cntlid": 135, 00:20:22.938 "qid": 0, 00:20:22.938 "state": "enabled", 00:20:22.938 "thread": "nvmf_tgt_poll_group_000", 00:20:22.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:22.938 "listen_address": { 00:20:22.938 "trtype": "TCP", 00:20:22.938 "adrfam": "IPv4", 00:20:22.938 "traddr": "10.0.0.2", 00:20:22.938 "trsvcid": "4420" 00:20:22.938 }, 00:20:22.938 "peer_address": { 00:20:22.938 "trtype": "TCP", 00:20:22.938 "adrfam": "IPv4", 00:20:22.938 "traddr": "10.0.0.1", 00:20:22.938 "trsvcid": "42500" 00:20:22.938 }, 00:20:22.938 "auth": { 00:20:22.938 "state": "completed", 00:20:22.938 "digest": "sha512", 00:20:22.938 "dhgroup": "ffdhe6144" 00:20:22.938 } 00:20:22.938 } 00:20:22.938 ]' 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.938 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.201 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.201 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.201 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.201 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:23.201 09:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:23.774 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:24.036 09:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.036 09:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.610 00:20:24.610 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.610 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.610 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.872 { 00:20:24.872 "cntlid": 137, 00:20:24.872 "qid": 0, 00:20:24.872 "state": "enabled", 00:20:24.872 "thread": "nvmf_tgt_poll_group_000", 00:20:24.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:24.872 "listen_address": { 00:20:24.872 "trtype": "TCP", 00:20:24.872 "adrfam": "IPv4", 00:20:24.872 "traddr": "10.0.0.2", 00:20:24.872 
"trsvcid": "4420" 00:20:24.872 }, 00:20:24.872 "peer_address": { 00:20:24.872 "trtype": "TCP", 00:20:24.872 "adrfam": "IPv4", 00:20:24.872 "traddr": "10.0.0.1", 00:20:24.872 "trsvcid": "42528" 00:20:24.872 }, 00:20:24.872 "auth": { 00:20:24.872 "state": "completed", 00:20:24.872 "digest": "sha512", 00:20:24.872 "dhgroup": "ffdhe8192" 00:20:24.872 } 00:20:24.872 } 00:20:24.872 ]' 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.872 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.133 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:25.133 09:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:25.705 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.705 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.705 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.705 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.705 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.705 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.705 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:25.705 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.966 09:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.966 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.967 09:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.538 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.538 { 00:20:26.538 "cntlid": 139, 00:20:26.538 "qid": 0, 00:20:26.538 "state": "enabled", 00:20:26.538 "thread": "nvmf_tgt_poll_group_000", 00:20:26.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:26.538 "listen_address": { 00:20:26.538 "trtype": "TCP", 00:20:26.538 "adrfam": "IPv4", 00:20:26.538 "traddr": "10.0.0.2", 00:20:26.538 "trsvcid": "4420" 00:20:26.538 }, 00:20:26.538 "peer_address": { 00:20:26.538 "trtype": "TCP", 00:20:26.538 "adrfam": "IPv4", 00:20:26.538 "traddr": "10.0.0.1", 00:20:26.538 "trsvcid": "42552" 00:20:26.538 }, 00:20:26.538 "auth": { 00:20:26.538 "state": "completed", 00:20:26.538 "digest": "sha512", 00:20:26.538 "dhgroup": "ffdhe8192" 00:20:26.538 } 00:20:26.538 } 00:20:26.538 ]' 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.538 09:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.538 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.539 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.801 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.801 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.801 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.801 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:26.801 09:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: --dhchap-ctrl-secret DHHC-1:02:NmQzY2ZmNDI1NGE2YWJmM2YzZjA2MzdiZGIxNzQyNzQ2MzNhNjU5ZWRkNDI1Zjhi9FYf8w==: 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.742 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.314 00:20:28.314 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.314 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.314 09:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.314 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.314 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.314 09:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.314 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.314 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.314 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.314 { 00:20:28.314 "cntlid": 141, 00:20:28.314 "qid": 0, 00:20:28.314 "state": "enabled", 00:20:28.314 "thread": "nvmf_tgt_poll_group_000", 00:20:28.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:28.314 "listen_address": { 00:20:28.314 "trtype": "TCP", 00:20:28.314 "adrfam": "IPv4", 00:20:28.314 "traddr": "10.0.0.2", 00:20:28.314 "trsvcid": "4420" 00:20:28.314 }, 00:20:28.314 "peer_address": { 00:20:28.314 "trtype": "TCP", 00:20:28.314 "adrfam": "IPv4", 00:20:28.314 "traddr": "10.0.0.1", 00:20:28.314 "trsvcid": "42580" 00:20:28.314 }, 00:20:28.314 "auth": { 00:20:28.314 "state": "completed", 00:20:28.314 "digest": "sha512", 00:20:28.314 "dhgroup": "ffdhe8192" 00:20:28.314 } 00:20:28.314 } 00:20:28.314 ]' 00:20:28.314 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.314 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.575 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.575 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.575 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.575 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.575 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.575 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.575 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:28.575 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:01:OTJjZTE3MjFhNjQ0NTY3MDJjNjIxODcyYWRhYmI0YzXsBsYp: 00:20:29.518 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.518 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:29.518 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.518 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.518 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.518 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.518 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:29.518 09:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.518 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.089 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.089 { 00:20:30.089 "cntlid": 143, 00:20:30.089 "qid": 0, 00:20:30.089 "state": "enabled", 00:20:30.089 "thread": "nvmf_tgt_poll_group_000", 00:20:30.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:30.089 "listen_address": { 00:20:30.089 "trtype": "TCP", 00:20:30.089 "adrfam": 
"IPv4", 00:20:30.089 "traddr": "10.0.0.2", 00:20:30.089 "trsvcid": "4420" 00:20:30.089 }, 00:20:30.089 "peer_address": { 00:20:30.089 "trtype": "TCP", 00:20:30.089 "adrfam": "IPv4", 00:20:30.089 "traddr": "10.0.0.1", 00:20:30.089 "trsvcid": "44618" 00:20:30.089 }, 00:20:30.089 "auth": { 00:20:30.089 "state": "completed", 00:20:30.089 "digest": "sha512", 00:20:30.089 "dhgroup": "ffdhe8192" 00:20:30.089 } 00:20:30.089 } 00:20:30.089 ]' 00:20:30.089 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.349 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.349 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.349 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.349 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.349 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.349 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.349 09:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.610 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:30.610 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:31.181 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:31.442 09:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:31.442 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.442 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:31.442 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:31.443 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.443 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.443 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.443 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.443 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.443 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.443 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.443 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.443 09:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.704 00:20:31.704 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.704 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.704 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.964 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.964 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.964 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.964 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.964 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.964 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.964 { 00:20:31.964 "cntlid": 145, 00:20:31.964 "qid": 0, 00:20:31.964 "state": "enabled", 00:20:31.965 "thread": "nvmf_tgt_poll_group_000", 00:20:31.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:31.965 "listen_address": { 00:20:31.965 "trtype": "TCP", 00:20:31.965 "adrfam": "IPv4", 00:20:31.965 "traddr": "10.0.0.2", 00:20:31.965 "trsvcid": "4420" 00:20:31.965 }, 00:20:31.965 "peer_address": { 00:20:31.965 "trtype": "TCP", 00:20:31.965 "adrfam": "IPv4", 00:20:31.965 "traddr": "10.0.0.1", 00:20:31.965 "trsvcid": "44660" 00:20:31.965 }, 00:20:31.965 "auth": { 00:20:31.965 "state": 
"completed", 00:20:31.965 "digest": "sha512", 00:20:31.965 "dhgroup": "ffdhe8192" 00:20:31.965 } 00:20:31.965 } 00:20:31.965 ]' 00:20:31.965 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.965 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.965 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.225 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.225 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.225 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.225 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.225 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.486 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:32.486 09:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YzNjZGExNTJjYjVjNmYzODQ0NDUyYTU3YTM3NmVmMGFkOTQyZmNjMWExODI3MDRlL4MMqg==: --dhchap-ctrl-secret 
DHHC-1:03:M2Q2MGFmYzM0MTA1MWQ3MmY1NWJjZTVjNDdjOWFiYzhmMjMyNGYzM2NiNTBlNTA5MTE1NWYxZTVlNWRhMTU0NNi3Kuo=: 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:33.058 09:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:33.630 request: 00:20:33.630 { 00:20:33.630 "name": "nvme0", 00:20:33.630 "trtype": "tcp", 00:20:33.630 "traddr": "10.0.0.2", 00:20:33.630 "adrfam": "ipv4", 00:20:33.630 "trsvcid": "4420", 00:20:33.630 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:33.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:33.630 "prchk_reftag": false, 00:20:33.630 "prchk_guard": false, 00:20:33.630 "hdgst": false, 00:20:33.630 "ddgst": false, 00:20:33.630 "dhchap_key": "key2", 00:20:33.630 "allow_unrecognized_csi": false, 00:20:33.630 "method": "bdev_nvme_attach_controller", 00:20:33.630 "req_id": 1 00:20:33.630 } 00:20:33.630 Got JSON-RPC error response 00:20:33.630 response: 00:20:33.630 { 00:20:33.630 "code": -5, 00:20:33.630 "message": 
"Input/output error" 00:20:33.630 } 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:33.630 09:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:33.630 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:33.892 request: 00:20:33.892 { 00:20:33.892 "name": "nvme0", 00:20:33.892 "trtype": "tcp", 00:20:33.892 "traddr": "10.0.0.2", 00:20:33.892 "adrfam": "ipv4", 00:20:33.892 "trsvcid": "4420", 00:20:33.892 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:33.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:33.892 "prchk_reftag": false, 00:20:33.892 "prchk_guard": false, 00:20:33.892 "hdgst": 
false, 00:20:33.892 "ddgst": false, 00:20:33.892 "dhchap_key": "key1", 00:20:33.892 "dhchap_ctrlr_key": "ckey2", 00:20:33.892 "allow_unrecognized_csi": false, 00:20:33.892 "method": "bdev_nvme_attach_controller", 00:20:33.892 "req_id": 1 00:20:33.892 } 00:20:33.892 Got JSON-RPC error response 00:20:33.892 response: 00:20:33.892 { 00:20:33.892 "code": -5, 00:20:33.892 "message": "Input/output error" 00:20:33.892 } 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.892 09:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.464 request: 00:20:34.464 { 00:20:34.464 "name": "nvme0", 00:20:34.464 "trtype": 
"tcp", 00:20:34.464 "traddr": "10.0.0.2", 00:20:34.464 "adrfam": "ipv4", 00:20:34.464 "trsvcid": "4420", 00:20:34.464 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:34.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:34.464 "prchk_reftag": false, 00:20:34.464 "prchk_guard": false, 00:20:34.464 "hdgst": false, 00:20:34.464 "ddgst": false, 00:20:34.464 "dhchap_key": "key1", 00:20:34.464 "dhchap_ctrlr_key": "ckey1", 00:20:34.464 "allow_unrecognized_csi": false, 00:20:34.464 "method": "bdev_nvme_attach_controller", 00:20:34.464 "req_id": 1 00:20:34.464 } 00:20:34.465 Got JSON-RPC error response 00:20:34.465 response: 00:20:34.465 { 00:20:34.465 "code": -5, 00:20:34.465 "message": "Input/output error" 00:20:34.465 } 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 307785 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 307785 ']' 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 307785 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 307785 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 307785' 00:20:34.465 killing process with pid 307785 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 307785 00:20:34.465 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 307785 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=334216 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 334216 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 334216 ']' 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.725 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.726 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.726 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.726 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 334216 00:20:35.671 
09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 334216 ']' 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.671 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.671 null0 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KGu 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.933 
09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.20a ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.20a 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8Zg 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.qF5 ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qF5 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.933 09:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.O6V 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.cqY ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cqY 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tNa 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.933 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.933 09:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.934 09:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.506 nvme0n1 00:20:36.766 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.766 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.766 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.766 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.766 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.766 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.766 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.766 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.766 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.766 { 00:20:36.766 "cntlid": 1, 00:20:36.766 "qid": 0, 00:20:36.766 "state": "enabled", 00:20:36.766 "thread": "nvmf_tgt_poll_group_000", 00:20:36.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:36.766 "listen_address": { 00:20:36.766 "trtype": "TCP", 00:20:36.766 "adrfam": "IPv4", 00:20:36.766 "traddr": "10.0.0.2", 00:20:36.766 "trsvcid": "4420" 00:20:36.766 }, 00:20:36.766 "peer_address": { 00:20:36.766 "trtype": "TCP", 00:20:36.766 "adrfam": "IPv4", 00:20:36.766 "traddr": "10.0.0.1", 00:20:36.766 "trsvcid": "44710" 00:20:36.766 }, 00:20:36.766 "auth": { 
00:20:36.766 "state": "completed", 00:20:36.766 "digest": "sha512", 00:20:36.767 "dhgroup": "ffdhe8192" 00:20:36.767 } 00:20:36.767 } 00:20:36.767 ]' 00:20:36.767 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.767 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.767 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.027 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.027 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.027 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.027 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.027 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.027 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:37.027 09:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:37.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.970 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.231 request: 00:20:38.231 { 00:20:38.231 "name": "nvme0", 00:20:38.231 "trtype": "tcp", 00:20:38.231 "traddr": "10.0.0.2", 00:20:38.231 "adrfam": "ipv4", 00:20:38.231 "trsvcid": "4420", 00:20:38.231 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:38.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:38.231 "prchk_reftag": false, 00:20:38.231 "prchk_guard": false, 00:20:38.231 "hdgst": false, 00:20:38.231 "ddgst": false, 00:20:38.231 "dhchap_key": "key3", 00:20:38.232 "allow_unrecognized_csi": false, 00:20:38.232 "method": "bdev_nvme_attach_controller", 00:20:38.232 "req_id": 1 00:20:38.232 } 
00:20:38.232 Got JSON-RPC error response 00:20:38.232 response: 00:20:38.232 { 00:20:38.232 "code": -5, 00:20:38.232 "message": "Input/output error" 00:20:38.232 } 00:20:38.232 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:38.232 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:38.232 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:38.232 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:38.232 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:38.232 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:38.232 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:38.232 09:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.494 09:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.494 request: 00:20:38.494 { 00:20:38.494 "name": "nvme0", 00:20:38.494 "trtype": "tcp", 00:20:38.494 "traddr": "10.0.0.2", 00:20:38.494 "adrfam": "ipv4", 00:20:38.494 "trsvcid": "4420", 00:20:38.494 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:38.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:38.494 "prchk_reftag": false, 00:20:38.494 "prchk_guard": false, 00:20:38.494 "hdgst": false, 00:20:38.494 "ddgst": false, 00:20:38.494 "dhchap_key": "key3", 00:20:38.494 "allow_unrecognized_csi": false, 00:20:38.494 "method": "bdev_nvme_attach_controller", 00:20:38.494 "req_id": 1 00:20:38.494 } 00:20:38.494 Got JSON-RPC error response 00:20:38.494 response: 00:20:38.494 { 00:20:38.494 "code": -5, 00:20:38.494 "message": "Input/output error" 00:20:38.494 } 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:38.494 09:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.494 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:38.756 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:38.756 09:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:39.017 request: 00:20:39.017 { 00:20:39.017 "name": "nvme0", 00:20:39.017 "trtype": "tcp", 00:20:39.017 "traddr": "10.0.0.2", 00:20:39.017 "adrfam": "ipv4", 00:20:39.017 "trsvcid": "4420", 00:20:39.017 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:39.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:39.017 "prchk_reftag": false, 00:20:39.017 "prchk_guard": false, 00:20:39.017 "hdgst": false, 00:20:39.017 "ddgst": false, 00:20:39.017 "dhchap_key": "key0", 00:20:39.017 "dhchap_ctrlr_key": "key1", 00:20:39.017 "allow_unrecognized_csi": false, 00:20:39.017 "method": "bdev_nvme_attach_controller", 00:20:39.017 "req_id": 1 00:20:39.017 } 00:20:39.017 Got JSON-RPC error response 00:20:39.017 response: 00:20:39.017 { 00:20:39.017 "code": -5, 00:20:39.018 "message": "Input/output error" 00:20:39.018 } 00:20:39.018 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:39.018 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:39.018 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:39.018 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:39.018 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:39.018 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:39.018 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:39.279 nvme0n1 00:20:39.279 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:39.279 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:39.280 09:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.540 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.540 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.540 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.802 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:39.802 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.802 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.802 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:39.802 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:39.802 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:39.802 09:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:40.375 nvme0n1 00:20:40.375 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:40.375 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:40.375 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.635 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.635 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:40.635 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.635 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.635 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:40.635 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:40.635 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:40.635 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.896 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.896 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:40.896 09:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: --dhchap-ctrl-secret DHHC-1:03:MGQ4NjFiYjU2NTA5NmYyNjRiOWIyM2NjYzZhMmQwYTAyNDI4YWEzMDViNWFhNThiYzIyMGRkZDZiN2M2Mjk3MmFcVGY=: 00:20:41.467 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:41.467 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:41.467 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:41.467 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:41.467 09:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:41.467 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:41.467 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:41.467 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.467 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.728 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:41.729 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:41.729 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:41.729 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:41.729 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.729 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:41.729 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.729 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:41.729 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:20:41.729 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:41.989 request: 00:20:41.989 { 00:20:41.989 "name": "nvme0", 00:20:41.989 "trtype": "tcp", 00:20:41.989 "traddr": "10.0.0.2", 00:20:41.989 "adrfam": "ipv4", 00:20:41.989 "trsvcid": "4420", 00:20:41.989 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:41.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:41.989 "prchk_reftag": false, 00:20:41.989 "prchk_guard": false, 00:20:41.989 "hdgst": false, 00:20:41.990 "ddgst": false, 00:20:41.990 "dhchap_key": "key1", 00:20:41.990 "allow_unrecognized_csi": false, 00:20:41.990 "method": "bdev_nvme_attach_controller", 00:20:41.990 "req_id": 1 00:20:41.990 } 00:20:41.990 Got JSON-RPC error response 00:20:41.990 response: 00:20:41.990 { 00:20:41.990 "code": -5, 00:20:41.990 "message": "Input/output error" 00:20:41.990 } 00:20:41.990 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:41.990 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.990 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.990 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.990 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:41.990 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:41.990 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:42.966 nvme0n1 00:20:42.966 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:42.966 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:42.966 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.966 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.966 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.966 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.273 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:43.273 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.273 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.273 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.273 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:43.273 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:43.273 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:43.581 nvme0n1 00:20:43.581 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:43.581 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:43.581 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.581 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.581 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.581 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:43.896 
09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: '' 2s 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: ]] 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGVkZDJjMDYwMzQzMmY5MTNjZjNjMTkxOTE4ZTJkOWSp0n0/: 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:43.896 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:45.806 09:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.806 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: 2s 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: ]] 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MGU2YjBlZjAyOWYxN2JhNjA1N2U4ZjQ1NjdmNGI4M2UxYmZjNTViZWViZDUzOWFmvFT8Dg==: 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:45.807 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.348 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:48.348 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:48.609 nvme0n1 00:20:48.609 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:48.609 09:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.609 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.609 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.609 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:48.609 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:49.181 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:49.181 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:49.181 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.442 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.442 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:49.442 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.442 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.442 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.442 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:20:49.442 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:49.442 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:49.442 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:49.442 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:49.703 09:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:49.703 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:50.273 request: 00:20:50.273 { 00:20:50.273 "name": "nvme0", 00:20:50.273 "dhchap_key": "key1", 00:20:50.273 "dhchap_ctrlr_key": "key3", 00:20:50.273 "method": "bdev_nvme_set_keys", 00:20:50.273 "req_id": 1 00:20:50.273 } 00:20:50.273 Got JSON-RPC error response 00:20:50.273 response: 00:20:50.273 { 00:20:50.273 "code": -13, 00:20:50.273 "message": "Permission denied" 00:20:50.273 } 00:20:50.273 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:50.273 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.273 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.273 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.273 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:50.273 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:50.273 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.273 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:50.273 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:51.654 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:51.654 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:51.654 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.654 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:51.654 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:51.654 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.654 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.654 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.654 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:51.654 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:51.654 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:52.225 nvme0n1 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:52.225 
09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:52.225 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:52.797 request: 00:20:52.797 { 00:20:52.797 "name": "nvme0", 00:20:52.797 "dhchap_key": "key2", 00:20:52.797 "dhchap_ctrlr_key": "key0", 00:20:52.797 "method": "bdev_nvme_set_keys", 00:20:52.797 "req_id": 1 00:20:52.797 } 00:20:52.797 Got JSON-RPC error response 00:20:52.797 response: 00:20:52.797 { 00:20:52.797 "code": -13, 00:20:52.797 "message": "Permission denied" 00:20:52.797 } 00:20:52.797 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:52.797 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:52.797 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:52.797 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:52.797 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:52.798 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:52.798 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.058 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:53.058 09:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:54.001 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:54.001 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:54.001 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.001 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:54.001 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:54.001 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:54.001 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 308061 00:20:54.001 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 308061 ']' 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 308061 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 308061 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 308061' 00:20:54.262 killing process with 
pid 308061 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 308061 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 308061 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.262 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:54.262 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.262 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:54.262 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.262 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.523 rmmod nvme_tcp 00:20:54.523 rmmod nvme_fabrics 00:20:54.523 rmmod nvme_keyring 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 334216 ']' 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 334216 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 334216 ']' 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 334216 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:54.523 
09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 334216 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 334216' 00:20:54.523 killing process with pid 334216 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 334216 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 334216 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:54.523 09:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.523 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.071 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.071 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.KGu /tmp/spdk.key-sha256.8Zg /tmp/spdk.key-sha384.O6V /tmp/spdk.key-sha512.tNa /tmp/spdk.key-sha512.20a /tmp/spdk.key-sha384.qF5 /tmp/spdk.key-sha256.cqY '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:57.071 00:20:57.072 real 2m39.348s 00:20:57.072 user 5m57.501s 00:20:57.072 sys 0m24.724s 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.072 ************************************ 00:20:57.072 END TEST nvmf_auth_target 00:20:57.072 ************************************ 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:20:57.072 ************************************ 00:20:57.072 START TEST nvmf_bdevio_no_huge 00:20:57.072 ************************************ 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:57.072 * Looking for test storage... 00:20:57.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:57.072 09:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:57.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.072 --rc genhtml_branch_coverage=1 00:20:57.072 --rc genhtml_function_coverage=1 00:20:57.072 --rc genhtml_legend=1 00:20:57.072 --rc geninfo_all_blocks=1 00:20:57.072 --rc geninfo_unexecuted_blocks=1 00:20:57.072 00:20:57.072 ' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:57.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.072 --rc genhtml_branch_coverage=1 00:20:57.072 --rc genhtml_function_coverage=1 00:20:57.072 --rc genhtml_legend=1 00:20:57.072 --rc geninfo_all_blocks=1 00:20:57.072 --rc geninfo_unexecuted_blocks=1 00:20:57.072 00:20:57.072 ' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:57.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.072 --rc genhtml_branch_coverage=1 00:20:57.072 --rc genhtml_function_coverage=1 00:20:57.072 --rc genhtml_legend=1 00:20:57.072 --rc geninfo_all_blocks=1 00:20:57.072 --rc geninfo_unexecuted_blocks=1 00:20:57.072 00:20:57.072 ' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:57.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.072 --rc genhtml_branch_coverage=1 00:20:57.072 --rc genhtml_function_coverage=1 00:20:57.072 --rc 
genhtml_legend=1 00:20:57.072 --rc geninfo_all_blocks=1 00:20:57.072 --rc geninfo_unexecuted_blocks=1 00:20:57.072 00:20:57.072 ' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.072 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:57.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.073 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:05.217 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:21:05.218 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:05.218 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:05.218 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.218 
09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:05.218 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:05.218 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:05.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:21:05.218 00:21:05.218 --- 10.0.0.2 ping statistics --- 00:21:05.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.218 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:21:05.218 00:21:05.218 --- 10.0.0.1 ping statistics --- 00:21:05.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.218 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:05.218 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=342391 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 342391 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 342391 ']' 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.219 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.219 [2024-11-19 09:38:51.170860] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:21:05.219 [2024-11-19 09:38:51.170932] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:05.219 [2024-11-19 09:38:51.276919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.219 [2024-11-19 09:38:51.337803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.219 [2024-11-19 09:38:51.337850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.219 [2024-11-19 09:38:51.337859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.219 [2024-11-19 09:38:51.337866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.219 [2024-11-19 09:38:51.337872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:05.219 [2024-11-19 09:38:51.339661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:05.219 [2024-11-19 09:38:51.339823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:05.219 [2024-11-19 09:38:51.339982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.219 [2024-11-19 09:38:51.339982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:05.480 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.480 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:05.480 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.480 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.480 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.480 [2024-11-19 09:38:52.049998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:05.480 09:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.480 Malloc0 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.480 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.480 [2024-11-19 09:38:52.103891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.480 09:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.481 { 00:21:05.481 "params": { 00:21:05.481 "name": "Nvme$subsystem", 00:21:05.481 "trtype": "$TEST_TRANSPORT", 00:21:05.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.481 "adrfam": "ipv4", 00:21:05.481 "trsvcid": "$NVMF_PORT", 00:21:05.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.481 "hdgst": ${hdgst:-false}, 00:21:05.481 "ddgst": ${ddgst:-false} 00:21:05.481 }, 00:21:05.481 "method": "bdev_nvme_attach_controller" 00:21:05.481 } 00:21:05.481 EOF 00:21:05.481 )") 00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:05.481 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:05.481 "params": { 00:21:05.481 "name": "Nvme1", 00:21:05.481 "trtype": "tcp", 00:21:05.481 "traddr": "10.0.0.2", 00:21:05.481 "adrfam": "ipv4", 00:21:05.481 "trsvcid": "4420", 00:21:05.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.481 "hdgst": false, 00:21:05.481 "ddgst": false 00:21:05.481 }, 00:21:05.481 "method": "bdev_nvme_attach_controller" 00:21:05.481 }' 00:21:05.481 [2024-11-19 09:38:52.161829] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:21:05.481 [2024-11-19 09:38:52.161898] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid342730 ] 00:21:05.741 [2024-11-19 09:38:52.258856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:05.741 [2024-11-19 09:38:52.319792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.741 [2024-11-19 09:38:52.319953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.741 [2024-11-19 09:38:52.319953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.002 I/O targets: 00:21:06.002 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:06.002 00:21:06.002 00:21:06.002 CUnit - A unit testing framework for C - Version 2.1-3 00:21:06.002 http://cunit.sourceforge.net/ 00:21:06.002 00:21:06.002 00:21:06.002 Suite: bdevio tests on: Nvme1n1 00:21:06.002 Test: blockdev write read block ...passed 00:21:06.002 Test: blockdev write zeroes read block ...passed 00:21:06.002 Test: blockdev write zeroes read no split ...passed 00:21:06.263 Test: blockdev write zeroes 
read split ...passed 00:21:06.263 Test: blockdev write zeroes read split partial ...passed 00:21:06.263 Test: blockdev reset ...[2024-11-19 09:38:52.772904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:06.263 [2024-11-19 09:38:52.773003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5800 (9): Bad file descriptor 00:21:06.263 [2024-11-19 09:38:52.924199] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:21:06.263 passed 00:21:06.263 Test: blockdev write read 8 blocks ...passed 00:21:06.263 Test: blockdev write read size > 128k ...passed 00:21:06.263 Test: blockdev write read invalid size ...passed 00:21:06.524 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:06.524 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:06.524 Test: blockdev write read max offset ...passed 00:21:06.524 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:06.524 Test: blockdev writev readv 8 blocks ...passed 00:21:06.524 Test: blockdev writev readv 30 x 1block ...passed 00:21:06.524 Test: blockdev writev readv block ...passed 00:21:06.524 Test: blockdev writev readv size > 128k ...passed 00:21:06.524 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:06.524 Test: blockdev comparev and writev ...[2024-11-19 09:38:53.192276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.524 [2024-11-19 09:38:53.192326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.524 [2024-11-19 09:38:53.192343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.524 [2024-11-19 
09:38:53.192360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.524 [2024-11-19 09:38:53.192936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.524 [2024-11-19 09:38:53.192950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:06.524 [2024-11-19 09:38:53.192964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.524 [2024-11-19 09:38:53.192972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:06.524 [2024-11-19 09:38:53.193570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.524 [2024-11-19 09:38:53.193586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:06.524 [2024-11-19 09:38:53.193601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.524 [2024-11-19 09:38:53.193611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:06.524 [2024-11-19 09:38:53.194182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.524 [2024-11-19 09:38:53.194198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:06.524 [2024-11-19 09:38:53.194213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.524 [2024-11-19 09:38:53.194220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:06.524 passed 00:21:06.785 Test: blockdev nvme passthru rw ...passed 00:21:06.785 Test: blockdev nvme passthru vendor specific ...[2024-11-19 09:38:53.279105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.785 [2024-11-19 09:38:53.279124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:06.785 [2024-11-19 09:38:53.279498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.785 [2024-11-19 09:38:53.279513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:06.785 [2024-11-19 09:38:53.279887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.785 [2024-11-19 09:38:53.279902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:06.785 [2024-11-19 09:38:53.280281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.785 [2024-11-19 09:38:53.280293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:06.785 passed 00:21:06.785 Test: blockdev nvme admin passthru ...passed 00:21:06.785 Test: blockdev copy ...passed 00:21:06.785 00:21:06.785 Run Summary: Type Total Ran Passed Failed Inactive 00:21:06.785 suites 1 1 n/a 0 0 00:21:06.785 tests 23 23 23 0 0 00:21:06.785 asserts 152 152 152 0 n/a 00:21:06.785 00:21:06.785 Elapsed time = 1.431 seconds 
00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.046 rmmod nvme_tcp 00:21:07.046 rmmod nvme_fabrics 00:21:07.046 rmmod nvme_keyring 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 342391 ']' 00:21:07.046 09:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 342391 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 342391 ']' 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 342391 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.046 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 342391 00:21:07.307 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:07.307 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:07.307 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 342391' 00:21:07.307 killing process with pid 342391 00:21:07.307 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 342391 00:21:07.307 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 342391 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.307 09:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.307 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:09.855 00:21:09.855 real 0m12.703s 00:21:09.855 user 0m15.520s 00:21:09.855 sys 0m6.701s 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.855 ************************************ 00:21:09.855 END TEST nvmf_bdevio_no_huge 00:21:09.855 ************************************ 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:09.855 
************************************ 00:21:09.855 START TEST nvmf_tls 00:21:09.855 ************************************ 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:09.855 * Looking for test storage... 00:21:09.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.855 --rc genhtml_branch_coverage=1 00:21:09.855 --rc genhtml_function_coverage=1 00:21:09.855 --rc genhtml_legend=1 00:21:09.855 --rc geninfo_all_blocks=1 00:21:09.855 --rc geninfo_unexecuted_blocks=1 00:21:09.855 00:21:09.855 ' 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.855 --rc genhtml_branch_coverage=1 00:21:09.855 --rc genhtml_function_coverage=1 00:21:09.855 --rc genhtml_legend=1 00:21:09.855 --rc geninfo_all_blocks=1 00:21:09.855 --rc geninfo_unexecuted_blocks=1 00:21:09.855 00:21:09.855 ' 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.855 --rc genhtml_branch_coverage=1 00:21:09.855 --rc genhtml_function_coverage=1 00:21:09.855 --rc genhtml_legend=1 00:21:09.855 --rc geninfo_all_blocks=1 00:21:09.855 --rc geninfo_unexecuted_blocks=1 00:21:09.855 00:21:09.855 ' 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.855 --rc genhtml_branch_coverage=1 00:21:09.855 --rc genhtml_function_coverage=1 00:21:09.855 --rc genhtml_legend=1 00:21:09.855 --rc geninfo_all_blocks=1 00:21:09.855 --rc geninfo_unexecuted_blocks=1 00:21:09.855 00:21:09.855 ' 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.855 
09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.855 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:09.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:21:09.856 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.005 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.006 09:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:18.006 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:18.006 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.006 09:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:18.006 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:18.006 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:18.006 09:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.006 
09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:21:18.006 00:21:18.006 --- 10.0.0.2 ping statistics --- 00:21:18.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.006 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:21:18.006 00:21:18.006 --- 10.0.0.1 ping statistics --- 00:21:18.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.006 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:18.006 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=347272 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 347272 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 347272 ']' 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.007 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.007 [2024-11-19 09:39:04.002575] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:21:18.007 [2024-11-19 09:39:04.002646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.007 [2024-11-19 09:39:04.103746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.007 [2024-11-19 09:39:04.154811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.007 [2024-11-19 09:39:04.154862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:18.007 [2024-11-19 09:39:04.154870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.007 [2024-11-19 09:39:04.154878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.007 [2024-11-19 09:39:04.154884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.007 [2024-11-19 09:39:04.155680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.268 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.268 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.268 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.268 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.268 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.268 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.268 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:18.268 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:18.530 true 00:21:18.530 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:18.530 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:18.530 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:18.530 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:18.530 
09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:18.791 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:18.791 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:19.052 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:19.052 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:19.052 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:19.052 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.052 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:19.313 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:19.313 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:19.313 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.313 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:19.575 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:19.575 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:19.575 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
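Right after these socket-option checks, the transcript derives two TLS PSK interchange keys via `format_interchange_psk` (the `NVMeTLSkey-1:01:...` strings that follow). A minimal Python sketch of that encoding, assuming the hex key string is treated as ASCII and the CRC32 trailer is appended little-endian as in SPDK's `nvmf/common.sh` helper:

```python
import base64
import zlib

def format_interchange_psk(key: str, hmac_id: int) -> str:
    """Build an NVMe TLS PSK interchange string: prefix, 2-digit HMAC id,
    then base64(key bytes + 4-byte CRC32 trailer), colon-terminated.
    The key string is encoded as ASCII and the CRC byte order is an
    assumption (little-endian, per SPDK's shell helper)."""
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hmac_id:02x}:{b64}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
```

Decoding the base64 field of the key printed in the transcript recovers the ASCII key followed by a 4-byte checksum, which is the layout the sketch reproduces.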
00:21:19.835 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.835 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:19.835 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:19.835 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:19.835 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:20.095 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:20.095 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:20.356 09:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:20.356 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.BDpNlVjCsn 00:21:20.357 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:20.357 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.ufSqlB3t9G 00:21:20.357 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:20.357 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:20.357 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.BDpNlVjCsn 00:21:20.357 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.ufSqlB3t9G 00:21:20.357 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:20.618 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:20.618 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.BDpNlVjCsn 00:21:20.618 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BDpNlVjCsn 00:21:20.618 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:20.879 [2024-11-19 09:39:07.505167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.879 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:21.140 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:21.140 [2024-11-19 09:39:07.821927] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.140 [2024-11-19 09:39:07.822128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.140 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:21.400 malloc0 00:21:21.400 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:21.660 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BDpNlVjCsn 00:21:21.660 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:21.921 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.BDpNlVjCsn 00:21:31.916 Initializing NVMe Controllers 00:21:31.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:31.916 Initialization complete. Launching workers. 
00:21:31.916 ======================================================== 00:21:31.916 Latency(us) 00:21:31.916 Device Information : IOPS MiB/s Average min max 00:21:31.916 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18632.06 72.78 3435.16 1177.69 4078.67 00:21:31.916 ======================================================== 00:21:31.916 Total : 18632.06 72.78 3435.16 1177.69 4078.67 00:21:31.916 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BDpNlVjCsn 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BDpNlVjCsn 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=350705 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 350705 /var/tmp/bdevperf.sock 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 350705 ']' 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
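The MiB/s column in the perf summaries above is just IOPS scaled by the 4096-byte I/O size used for both runs; a quick Python check, with the figures copied from the transcript:

```python
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size_bytes / (1024 * 1024)

# spdk_nvme_perf summary row: 18632.06 IOPS at 4 KiB I/O
print(round(iops_to_mibps(18632.06), 2))  # 72.78
# bdevperf TLSTESTn1 row: 2926.01 IOPS at 4 KiB I/O
print(round(iops_to_mibps(2926.01), 2))   # 11.43
```

Both values match the MiB/s figures reported in the tables, confirming the reports use a fixed 4 KiB block size.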
00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.916 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.176 [2024-11-19 09:39:18.706470] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:21:32.176 [2024-11-19 09:39:18.706525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350705 ] 00:21:32.176 [2024-11-19 09:39:18.793858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.176 [2024-11-19 09:39:18.829224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.747 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.007 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:33.007 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BDpNlVjCsn 00:21:33.007 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:33.269 [2024-11-19 09:39:19.812513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.269 TLSTESTn1 00:21:33.269 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:33.269 Running I/O for 10 seconds... 00:21:35.593 1582.00 IOPS, 6.18 MiB/s [2024-11-19T08:39:23.280Z] 1951.50 IOPS, 7.62 MiB/s [2024-11-19T08:39:24.220Z] 1956.00 IOPS, 7.64 MiB/s [2024-11-19T08:39:25.158Z] 2776.50 IOPS, 10.85 MiB/s [2024-11-19T08:39:26.097Z] 2747.60 IOPS, 10.73 MiB/s [2024-11-19T08:39:27.037Z] 2758.67 IOPS, 10.78 MiB/s [2024-11-19T08:39:28.418Z] 2590.86 IOPS, 10.12 MiB/s [2024-11-19T08:39:29.374Z] 3031.62 IOPS, 11.84 MiB/s [2024-11-19T08:39:30.316Z] 2949.56 IOPS, 11.52 MiB/s [2024-11-19T08:39:30.316Z] 2933.50 IOPS, 11.46 MiB/s 00:21:43.568 Latency(us) 00:21:43.568 [2024-11-19T08:39:30.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.568 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:43.568 Verification LBA range: start 0x0 length 0x2000 00:21:43.568 TLSTESTn1 : 10.07 2926.01 11.43 0.00 0.00 43594.49 6007.47 99177.81 00:21:43.568 [2024-11-19T08:39:30.316Z] =================================================================================================================== 00:21:43.568 [2024-11-19T08:39:30.316Z] Total : 2926.01 11.43 0.00 0.00 43594.49 6007.47 99177.81 00:21:43.568 { 00:21:43.568 "results": [ 00:21:43.568 { 00:21:43.568 "job": "TLSTESTn1", 00:21:43.568 "core_mask": "0x4", 00:21:43.568 "workload": "verify", 00:21:43.568 "status": "finished", 00:21:43.568 "verify_range": { 00:21:43.568 "start": 0, 00:21:43.568 "length": 8192 00:21:43.568 }, 00:21:43.568 "queue_depth": 128, 00:21:43.568 "io_size": 4096, 00:21:43.568 "runtime": 10.069356, 00:21:43.568 "iops": 
2926.0063900809546, 00:21:43.568 "mibps": 11.429712461253729, 00:21:43.568 "io_failed": 0, 00:21:43.568 "io_timeout": 0, 00:21:43.568 "avg_latency_us": 43594.486734774684, 00:21:43.568 "min_latency_us": 6007.466666666666, 00:21:43.568 "max_latency_us": 99177.81333333334 00:21:43.568 } 00:21:43.568 ], 00:21:43.568 "core_count": 1 00:21:43.568 } 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 350705 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 350705 ']' 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 350705 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 350705 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 350705' 00:21:43.568 killing process with pid 350705 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 350705 00:21:43.568 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.568 00:21:43.568 Latency(us) 00:21:43.568 [2024-11-19T08:39:30.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.568 [2024-11-19T08:39:30.316Z] 
=================================================================================================================== 00:21:43.568 [2024-11-19T08:39:30.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 350705 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ufSqlB3t9G 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ufSqlB3t9G 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ufSqlB3t9G 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ufSqlB3t9G 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353001 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353001 /var/tmp/bdevperf.sock 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353001 ']' 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.568 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.828 [2024-11-19 09:39:30.341771] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:21:43.828 [2024-11-19 09:39:30.341831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353001 ] 00:21:43.828 [2024-11-19 09:39:30.426012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.828 [2024-11-19 09:39:30.454924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.398 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.398 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:44.398 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ufSqlB3t9G 00:21:44.659 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:44.919 [2024-11-19 09:39:31.449379] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.919 [2024-11-19 09:39:31.458296] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:44.919 [2024-11-19 09:39:31.458478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1328bb0 (107): Transport endpoint is not connected 00:21:44.919 [2024-11-19 09:39:31.459474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1328bb0 (9): Bad file descriptor 00:21:44.919 
[2024-11-19 09:39:31.460476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:44.919 [2024-11-19 09:39:31.460484] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:44.919 [2024-11-19 09:39:31.460489] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:44.919 [2024-11-19 09:39:31.460497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:44.919 request: 00:21:44.919 { 00:21:44.919 "name": "TLSTEST", 00:21:44.919 "trtype": "tcp", 00:21:44.919 "traddr": "10.0.0.2", 00:21:44.919 "adrfam": "ipv4", 00:21:44.919 "trsvcid": "4420", 00:21:44.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.919 "prchk_reftag": false, 00:21:44.919 "prchk_guard": false, 00:21:44.919 "hdgst": false, 00:21:44.919 "ddgst": false, 00:21:44.919 "psk": "key0", 00:21:44.919 "allow_unrecognized_csi": false, 00:21:44.919 "method": "bdev_nvme_attach_controller", 00:21:44.919 "req_id": 1 00:21:44.919 } 00:21:44.919 Got JSON-RPC error response 00:21:44.919 response: 00:21:44.919 { 00:21:44.919 "code": -5, 00:21:44.919 "message": "Input/output error" 00:21:44.919 } 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 353001 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353001 ']' 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353001 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353001 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353001' 00:21:44.919 killing process with pid 353001 00:21:44.919 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353001 00:21:44.920 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.920 00:21:44.920 Latency(us) 00:21:44.920 [2024-11-19T08:39:31.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.920 [2024-11-19T08:39:31.668Z] =================================================================================================================== 00:21:44.920 [2024-11-19T08:39:31.668Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353001 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BDpNlVjCsn 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BDpNlVjCsn 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BDpNlVjCsn 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BDpNlVjCsn 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353153 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353153 /var/tmp/bdevperf.sock 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 
-w verify -t 10 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353153 ']' 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.920 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 [2024-11-19 09:39:31.703498] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
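The interleaved entries above follow a regular shape, `[<timestamp>] <file>:<line>:<func>: *<LEVEL>*: <message>`, which makes them easy to filter when triaging a run like this one. A minimal sketch (the group names are my own labels, not anything SPDK defines) of pulling the fields out of one of the error lines printed earlier:

```python
import re

# Shape observed in the SPDK entries above:
#   "[<timestamp>] <file>: <line>:<func>: *<LEVEL>*: <message>"
# Group names are my own; some entries pad the line number with a space,
# hence the \s* before the digits.
ENTRY = re.compile(
    r"\[(?P<ts>[^\]]+)\]\s+"            # [2024-11-19 09:39:31.459474]
    r"(?P<src>\S+):\s*(?P<line>\d+):"   # nvme_tcp.c:2085
    r"(?P<func>\w+):\s+"                # nvme_tcp_qpair_process_completions
    r"\*(?P<level>\w+)\*:\s+"           # *ERROR*
    r"(?P<msg>.*)"                      # the rest of the entry
)

# Sample copied verbatim from the log above.
sample = ("[2024-11-19 09:39:31.459474] nvme_tcp.c:2085:"
          "nvme_tcp_qpair_process_completions: *ERROR*: "
          "Failed to flush tqpair=0x1328bb0 (9): Bad file descriptor")

m = ENTRY.match(sample)
print(m.group("level"), m.group("src"), m.group("line"), m.group("msg"))
```

The same pattern matches the `app.c: 919:` and `nvme.c: 708:` entries, whose line numbers are space-padded.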
00:21:45.180 [2024-11-19 09:39:31.703551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353153 ] 00:21:45.180 [2024-11-19 09:39:31.785338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.180 [2024-11-19 09:39:31.814367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.752 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.752 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:45.752 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BDpNlVjCsn 00:21:46.012 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:46.273 [2024-11-19 09:39:32.780809] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.273 [2024-11-19 09:39:32.791446] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:46.273 [2024-11-19 09:39:32.791473] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:46.273 [2024-11-19 09:39:32.791492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:46.273 [2024-11-19 09:39:32.791848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127dbb0 (107): Transport endpoint is not connected 00:21:46.273 [2024-11-19 09:39:32.792843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127dbb0 (9): Bad file descriptor 00:21:46.273 [2024-11-19 09:39:32.793845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:46.273 [2024-11-19 09:39:32.793851] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:46.273 [2024-11-19 09:39:32.793857] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:46.273 [2024-11-19 09:39:32.793864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:46.273 request: 00:21:46.273 { 00:21:46.273 "name": "TLSTEST", 00:21:46.273 "trtype": "tcp", 00:21:46.273 "traddr": "10.0.0.2", 00:21:46.273 "adrfam": "ipv4", 00:21:46.273 "trsvcid": "4420", 00:21:46.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.273 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:46.273 "prchk_reftag": false, 00:21:46.273 "prchk_guard": false, 00:21:46.273 "hdgst": false, 00:21:46.273 "ddgst": false, 00:21:46.273 "psk": "key0", 00:21:46.273 "allow_unrecognized_csi": false, 00:21:46.273 "method": "bdev_nvme_attach_controller", 00:21:46.273 "req_id": 1 00:21:46.273 } 00:21:46.273 Got JSON-RPC error response 00:21:46.273 response: 00:21:46.273 { 00:21:46.273 "code": -5, 00:21:46.273 "message": "Input/output error" 00:21:46.273 } 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 353153 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353153 ']' 00:21:46.273 09:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353153 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353153 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353153' 00:21:46.273 killing process with pid 353153 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353153 00:21:46.273 Received shutdown signal, test time was about 10.000000 seconds 00:21:46.273 00:21:46.273 Latency(us) 00:21:46.273 [2024-11-19T08:39:33.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.273 [2024-11-19T08:39:33.021Z] =================================================================================================================== 00:21:46.273 [2024-11-19T08:39:33.021Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353153 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.273 09:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BDpNlVjCsn 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BDpNlVjCsn 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BDpNlVjCsn 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BDpNlVjCsn 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353415 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353415 /var/tmp/bdevperf.sock 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353415 ']' 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.273 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.534 [2024-11-19 09:39:33.026612] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:21:46.534 [2024-11-19 09:39:33.026669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353415 ] 00:21:46.534 [2024-11-19 09:39:33.109721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.534 [2024-11-19 09:39:33.138382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.105 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.105 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:47.105 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BDpNlVjCsn 00:21:47.365 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:47.626 [2024-11-19 09:39:34.133228] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.626 [2024-11-19 09:39:34.144743] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:47.626 [2024-11-19 09:39:34.144761] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:47.626 [2024-11-19 09:39:34.144778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:47.626 [2024-11-19 09:39:34.145320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1456bb0 (107): Transport endpoint is not connected 00:21:47.626 [2024-11-19 09:39:34.146316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1456bb0 (9): Bad file descriptor 00:21:47.626 [2024-11-19 09:39:34.147318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:47.626 [2024-11-19 09:39:34.147325] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:47.626 [2024-11-19 09:39:34.147331] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:47.626 [2024-11-19 09:39:34.147338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:47.626 request: 00:21:47.626 { 00:21:47.626 "name": "TLSTEST", 00:21:47.626 "trtype": "tcp", 00:21:47.626 "traddr": "10.0.0.2", 00:21:47.626 "adrfam": "ipv4", 00:21:47.626 "trsvcid": "4420", 00:21:47.626 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:47.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.626 "prchk_reftag": false, 00:21:47.626 "prchk_guard": false, 00:21:47.626 "hdgst": false, 00:21:47.626 "ddgst": false, 00:21:47.626 "psk": "key0", 00:21:47.626 "allow_unrecognized_csi": false, 00:21:47.626 "method": "bdev_nvme_attach_controller", 00:21:47.626 "req_id": 1 00:21:47.626 } 00:21:47.626 Got JSON-RPC error response 00:21:47.626 response: 00:21:47.626 { 00:21:47.626 "code": -5, 00:21:47.626 "message": "Input/output error" 00:21:47.626 } 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 353415 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353415 ']' 00:21:47.626 09:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353415 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353415 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353415' 00:21:47.626 killing process with pid 353415 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353415 00:21:47.626 Received shutdown signal, test time was about 10.000000 seconds 00:21:47.626 00:21:47.626 Latency(us) 00:21:47.626 [2024-11-19T08:39:34.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.626 [2024-11-19T08:39:34.374Z] =================================================================================================================== 00:21:47.626 [2024-11-19T08:39:34.374Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353415 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.626 09:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:47.626 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353754 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:47.627 09:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353754 /var/tmp/bdevperf.sock 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353754 ']' 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.627 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.887 [2024-11-19 09:39:34.397401] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:21:47.887 [2024-11-19 09:39:34.397473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353754 ] 00:21:47.887 [2024-11-19 09:39:34.482274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.887 [2024-11-19 09:39:34.510965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.456 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.456 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:48.456 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:48.716 [2024-11-19 09:39:35.348848] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:48.716 [2024-11-19 09:39:35.348874] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:48.716 request: 00:21:48.716 { 00:21:48.716 "name": "key0", 00:21:48.716 "path": "", 00:21:48.716 "method": "keyring_file_add_key", 00:21:48.716 "req_id": 1 00:21:48.716 } 00:21:48.716 Got JSON-RPC error response 00:21:48.716 response: 00:21:48.716 { 00:21:48.716 "code": -1, 00:21:48.716 "message": "Operation not permitted" 00:21:48.716 } 00:21:48.716 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:48.977 [2024-11-19 09:39:35.537404] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
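The `request:`/`response:` blocks dumped above are the params and error of a JSON-RPC 2.0 exchange that `rpc.py` carries over the bdevperf Unix socket. The sketch below only rebuilds the failing `keyring_file_add_key` payload offline to show the framing; the envelope fields are standard JSON-RPC 2.0, the params and error code are copied from the log, and no socket I/O is attempted:

```python
import json

# Rebuild the failing keyring_file_add_key call from the log, offline.
# Envelope fields (jsonrpc/id/method/params) follow JSON-RPC 2.0, which is
# what SPDK's rpc.py speaks over the Unix socket; params are from the log.
def make_request(method, params, req_id=1):
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

req = make_request("keyring_file_add_key", {"name": "key0", "path": ""})
wire = json.dumps(req).encode()  # rpc.py writes one JSON object per call

# The target rejected the call because "" is not an absolute path, which
# surfaces as the error object seen in the log:
resp = {"jsonrpc": "2.0", "id": 1,
        "error": {"code": -1, "message": "Operation not permitted"}}
print(json.loads(wire)["method"], resp["error"]["code"])
```

The later `bdev_nvme_attach_controller` call then fails with `-126` ("Required key not available") because `key0` never made it into the keyring.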
00:21:48.977 [2024-11-19 09:39:35.537425] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:48.977 request: 00:21:48.977 { 00:21:48.977 "name": "TLSTEST", 00:21:48.977 "trtype": "tcp", 00:21:48.977 "traddr": "10.0.0.2", 00:21:48.977 "adrfam": "ipv4", 00:21:48.977 "trsvcid": "4420", 00:21:48.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.977 "prchk_reftag": false, 00:21:48.977 "prchk_guard": false, 00:21:48.977 "hdgst": false, 00:21:48.977 "ddgst": false, 00:21:48.977 "psk": "key0", 00:21:48.977 "allow_unrecognized_csi": false, 00:21:48.977 "method": "bdev_nvme_attach_controller", 00:21:48.977 "req_id": 1 00:21:48.977 } 00:21:48.977 Got JSON-RPC error response 00:21:48.977 response: 00:21:48.977 { 00:21:48.977 "code": -126, 00:21:48.977 "message": "Required key not available" 00:21:48.977 } 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 353754 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353754 ']' 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353754 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353754 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353754' 00:21:48.977 killing process with pid 353754 00:21:48.977 
09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353754 00:21:48.977 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.977 00:21:48.977 Latency(us) 00:21:48.977 [2024-11-19T08:39:35.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.977 [2024-11-19T08:39:35.725Z] =================================================================================================================== 00:21:48.977 [2024-11-19T08:39:35.725Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353754 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 347272 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 347272 ']' 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 347272 00:21:48.977 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 347272 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 347272' 00:21:49.238 killing process with pid 347272 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 347272 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 347272 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.OFt8sCkxfC 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:49.238 09:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.OFt8sCkxfC 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354111 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354111 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354111 ']' 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.238 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.498 [2024-11-19 09:39:36.007799] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
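The `format_interchange_psk` step above emits the key in the NVMe TLS interchange form `NVMeTLSkey-1:<hash>:<base64 payload>:`. Decoding the base64 the log prints shows the payload is the configured key bytes followed by 4 extra bytes, which the interchange format defines as a CRC-32 of the key; the little-endian byte order below is my assumption, so treat this as a sketch rather than SPDK's implementation:

```python
import base64
import zlib

# Sketch of the NVMe TLS PSK interchange form seen in the log:
#   NVMeTLSkey-1:<hash id>:<base64(key || CRC-32(key))>:
# The CRC byte order (little-endian) is an assumption on my part.
def format_interchange_psk(key: bytes, hash_id: int) -> str:
    crc = zlib.crc32(key).to_bytes(4, "little")
    payload = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{hash_id:02}:{payload}:"

# Key taken from the test above (passed through as ASCII text, 48 bytes).
key = b"00112233445566778899aabbccddeeff0011223344556677"
psk = format_interchange_psk(key, 2)
print(psk)
```

The base64 column in the log's `key_long` value decodes to exactly those 48 key bytes plus the 4 CRC bytes, which is why the test can `chmod 0600` the result and feed it straight to `keyring_file_add_key`.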
00:21:49.498 [2024-11-19 09:39:36.007855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.498 [2024-11-19 09:39:36.096056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.498 [2024-11-19 09:39:36.125993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.498 [2024-11-19 09:39:36.126026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.498 [2024-11-19 09:39:36.126033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.498 [2024-11-19 09:39:36.126038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.498 [2024-11-19 09:39:36.126041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.498 [2024-11-19 09:39:36.126520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.069 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.069 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:50.069 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.069 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.069 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.328 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.329 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.OFt8sCkxfC 00:21:50.329 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OFt8sCkxfC 00:21:50.329 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:50.329 [2024-11-19 09:39:36.997944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.329 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:50.588 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:50.848 [2024-11-19 09:39:37.354818] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:50.848 [2024-11-19 09:39:37.355010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:50.848 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:50.848 malloc0 00:21:50.848 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:51.108 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC 00:21:51.369 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OFt8sCkxfC 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OFt8sCkxfC 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=354479 00:21:51.369 09:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 354479 /var/tmp/bdevperf.sock 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354479 ']' 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.369 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.629 [2024-11-19 09:39:38.148244] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:21:51.629 [2024-11-19 09:39:38.148295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354479 ] 00:21:51.629 [2024-11-19 09:39:38.231040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.629 [2024-11-19 09:39:38.259921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.571 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.572 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:52.572 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC 00:21:52.572 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:52.572 [2024-11-19 09:39:39.290347] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.835 TLSTESTn1 00:21:52.835 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:52.835 Running I/O for 10 seconds... 
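A note on reading the bdevperf samples and summary table that follow: each figure is reported twice, as IOPS and MiB/s, and with the fixed 4096-byte I/O size of this run the two are locked together (MiB/s = IOPS × 4096 / 2²⁰, i.e. IOPS / 256). A small sanity-check helper (the function name is illustrative, not part of the bdevperf API):

```python
def iops_to_mibps(iops: float, io_size: int = 4096) -> float:
    """Convert an IOPS sample to MiB/s for a fixed I/O size in bytes."""
    return iops * io_size / (1 << 20)

# e.g. the run's final average of 3841.67 IOPS at 4 KiB is ~15.01 MiB/s,
# matching the 'mibps' field in the JSON results
```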
00:21:55.158 3603.00 IOPS, 14.07 MiB/s [2024-11-19T08:39:42.847Z] 4639.50 IOPS, 18.12 MiB/s [2024-11-19T08:39:43.788Z] 4540.00 IOPS, 17.73 MiB/s [2024-11-19T08:39:44.730Z] 4324.00 IOPS, 16.89 MiB/s [2024-11-19T08:39:45.671Z] 4616.40 IOPS, 18.03 MiB/s [2024-11-19T08:39:46.613Z] 4224.50 IOPS, 16.50 MiB/s [2024-11-19T08:39:47.556Z] 3892.43 IOPS, 15.20 MiB/s [2024-11-19T08:39:48.942Z] 3731.12 IOPS, 14.57 MiB/s [2024-11-19T08:39:49.513Z] 3944.67 IOPS, 15.41 MiB/s [2024-11-19T08:39:49.773Z] 3864.20 IOPS, 15.09 MiB/s 00:22:03.025 Latency(us) 00:22:03.025 [2024-11-19T08:39:49.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.025 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:03.025 Verification LBA range: start 0x0 length 0x2000 00:22:03.025 TLSTESTn1 : 10.09 3841.67 15.01 0.00 0.00 33189.81 5925.55 91313.49 00:22:03.025 [2024-11-19T08:39:49.773Z] =================================================================================================================== 00:22:03.025 [2024-11-19T08:39:49.773Z] Total : 3841.67 15.01 0.00 0.00 33189.81 5925.55 91313.49 00:22:03.025 { 00:22:03.025 "results": [ 00:22:03.025 { 00:22:03.025 "job": "TLSTESTn1", 00:22:03.025 "core_mask": "0x4", 00:22:03.025 "workload": "verify", 00:22:03.025 "status": "finished", 00:22:03.025 "verify_range": { 00:22:03.025 "start": 0, 00:22:03.025 "length": 8192 00:22:03.025 }, 00:22:03.025 "queue_depth": 128, 00:22:03.025 "io_size": 4096, 00:22:03.025 "runtime": 10.091963, 00:22:03.025 "iops": 3841.670842431745, 00:22:03.025 "mibps": 15.006526728249003, 00:22:03.025 "io_failed": 0, 00:22:03.025 "io_timeout": 0, 00:22:03.025 "avg_latency_us": 33189.81052428854, 00:22:03.025 "min_latency_us": 5925.546666666667, 00:22:03.025 "max_latency_us": 91313.49333333333 00:22:03.025 } 00:22:03.025 ], 00:22:03.025 "core_count": 1 00:22:03.025 } 00:22:03.025 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:22:03.025 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 354479 00:22:03.025 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354479 ']' 00:22:03.025 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354479 00:22:03.025 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:03.025 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.025 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354479 00:22:03.026 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:03.026 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:03.026 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354479' 00:22:03.026 killing process with pid 354479 00:22:03.026 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354479 00:22:03.026 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.026 00:22:03.026 Latency(us) 00:22:03.026 [2024-11-19T08:39:49.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.026 [2024-11-19T08:39:49.774Z] =================================================================================================================== 00:22:03.026 [2024-11-19T08:39:49.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.026 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354479 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.OFt8sCkxfC 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OFt8sCkxfC 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OFt8sCkxfC 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OFt8sCkxfC 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OFt8sCkxfC 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=356817 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 356817 /var/tmp/bdevperf.sock 
00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 356817 ']' 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.287 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.287 [2024-11-19 09:39:49.858948] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
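Context for the `chmod 0666` at tls.sh@171 above: it deliberately breaks the keyring's permission check, so the `NOT run_bdevperf` case that follows expects `keyring_file_add_key` to fail (the trace shows "Invalid permissions for key file ... 0100666", while the earlier 0600 key was accepted). A sketch of that style of check, assuming the rule is simply "no group/other access bits", which is consistent with the 0600-passes/0666-fails behavior in the log but is not SPDK's actual implementation:

```python
import os
import stat
import tempfile

def check_key_file_permissions(path: str) -> None:
    """Reject key files that group or other can access, in the spirit of
    SPDK's keyring_file_check_path (assumed rule: mode & 0o077 must be 0)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(
            "Invalid permissions for key file '%s': 0%o" % (path, mode))

# usage sketch mirroring the test flow: 0600 accepted, 0666 rejected
with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name
os.chmod(key_path, 0o600)
check_key_file_permissions(key_path)      # accepted, like tls.sh@163
os.chmod(key_path, 0o666)
try:
    check_key_file_permissions(key_path)  # rejected, like the RPC error below
except PermissionError:
    pass
os.unlink(key_path)
```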
00:22:03.287 [2024-11-19 09:39:49.859002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356817 ] 00:22:03.287 [2024-11-19 09:39:49.943307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.287 [2024-11-19 09:39:49.971416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.230 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.230 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:04.230 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC 00:22:04.230 [2024-11-19 09:39:50.807039] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OFt8sCkxfC': 0100666 00:22:04.230 [2024-11-19 09:39:50.807067] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:04.230 request: 00:22:04.230 { 00:22:04.230 "name": "key0", 00:22:04.230 "path": "/tmp/tmp.OFt8sCkxfC", 00:22:04.230 "method": "keyring_file_add_key", 00:22:04.230 "req_id": 1 00:22:04.230 } 00:22:04.230 Got JSON-RPC error response 00:22:04.230 response: 00:22:04.230 { 00:22:04.230 "code": -1, 00:22:04.230 "message": "Operation not permitted" 00:22:04.230 } 00:22:04.230 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:04.490 [2024-11-19 09:39:50.995581] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.490 [2024-11-19 09:39:50.995602] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:04.490 request: 00:22:04.490 { 00:22:04.490 "name": "TLSTEST", 00:22:04.490 "trtype": "tcp", 00:22:04.490 "traddr": "10.0.0.2", 00:22:04.490 "adrfam": "ipv4", 00:22:04.490 "trsvcid": "4420", 00:22:04.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.490 "prchk_reftag": false, 00:22:04.490 "prchk_guard": false, 00:22:04.490 "hdgst": false, 00:22:04.490 "ddgst": false, 00:22:04.490 "psk": "key0", 00:22:04.490 "allow_unrecognized_csi": false, 00:22:04.490 "method": "bdev_nvme_attach_controller", 00:22:04.490 "req_id": 1 00:22:04.490 } 00:22:04.490 Got JSON-RPC error response 00:22:04.490 response: 00:22:04.490 { 00:22:04.490 "code": -126, 00:22:04.490 "message": "Required key not available" 00:22:04.490 } 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 356817 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 356817 ']' 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 356817 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 356817 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 356817' 00:22:04.490 killing process with pid 356817 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 356817 00:22:04.490 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.490 00:22:04.490 Latency(us) 00:22:04.490 [2024-11-19T08:39:51.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.490 [2024-11-19T08:39:51.238Z] =================================================================================================================== 00:22:04.490 [2024-11-19T08:39:51.238Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 356817 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 354111 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354111 ']' 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354111 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.490 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354111 00:22:04.750 09:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354111' 00:22:04.750 killing process with pid 354111 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354111 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354111 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357162 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357162 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357162 ']' 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:04.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.750 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.750 [2024-11-19 09:39:51.419620] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:04.750 [2024-11-19 09:39:51.419675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.009 [2024-11-19 09:39:51.510370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.009 [2024-11-19 09:39:51.538879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.009 [2024-11-19 09:39:51.538906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.009 [2024-11-19 09:39:51.538912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.009 [2024-11-19 09:39:51.538916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.009 [2024-11-19 09:39:51.538921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:05.009 [2024-11-19 09:39:51.539353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.OFt8sCkxfC 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.OFt8sCkxfC 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.OFt8sCkxfC 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OFt8sCkxfC 00:22:05.579 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:05.839 [2024-11-19 09:39:52.398191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.840 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:05.840 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:06.100 [2024-11-19 09:39:52.718967] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.100 [2024-11-19 09:39:52.719171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.100 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:06.360 malloc0 00:22:06.360 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:06.360 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC 00:22:06.620 [2024-11-19 09:39:53.205962] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OFt8sCkxfC': 0100666 00:22:06.620 [2024-11-19 09:39:53.205982] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:06.620 request: 00:22:06.620 { 00:22:06.620 "name": "key0", 00:22:06.620 "path": "/tmp/tmp.OFt8sCkxfC", 00:22:06.620 "method": "keyring_file_add_key", 00:22:06.620 "req_id": 1 
00:22:06.620 } 00:22:06.620 Got JSON-RPC error response 00:22:06.620 response: 00:22:06.620 { 00:22:06.620 "code": -1, 00:22:06.620 "message": "Operation not permitted" 00:22:06.620 } 00:22:06.620 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:06.880 [2024-11-19 09:39:53.374400] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:06.880 [2024-11-19 09:39:53.374429] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:06.880 request: 00:22:06.880 { 00:22:06.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.880 "host": "nqn.2016-06.io.spdk:host1", 00:22:06.880 "psk": "key0", 00:22:06.880 "method": "nvmf_subsystem_add_host", 00:22:06.880 "req_id": 1 00:22:06.880 } 00:22:06.880 Got JSON-RPC error response 00:22:06.880 response: 00:22:06.880 { 00:22:06.880 "code": -32603, 00:22:06.880 "message": "Internal error" 00:22:06.880 } 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 357162 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357162 ']' 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357162 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:06.880 09:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357162 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357162' 00:22:06.880 killing process with pid 357162 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357162 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357162 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.OFt8sCkxfC 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357541 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357541 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357541 ']' 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.880 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.141 [2024-11-19 09:39:53.627319] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:07.141 [2024-11-19 09:39:53.627376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.141 [2024-11-19 09:39:53.717205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.141 [2024-11-19 09:39:53.746856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.141 [2024-11-19 09:39:53.746883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.141 [2024-11-19 09:39:53.746889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.141 [2024-11-19 09:39:53.746893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.141 [2024-11-19 09:39:53.746898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:07.141 [2024-11-19 09:39:53.747336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.712 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.712 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:07.712 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:07.712 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.712 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.712 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.712 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.OFt8sCkxfC 00:22:07.712 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OFt8sCkxfC 00:22:07.712 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:07.973 [2024-11-19 09:39:54.610587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.973 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:08.233 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:08.233 [2024-11-19 09:39:54.947411] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:08.233 [2024-11-19 09:39:54.947606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:08.233 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:08.493 malloc0 00:22:08.493 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:08.752 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC 00:22:08.752 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:09.011 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:09.011 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=357917 00:22:09.011 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:09.011 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 357917 /var/tmp/bdevperf.sock 00:22:09.011 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357917 ']' 00:22:09.011 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.011 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.011 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:22:09.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.012 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.012 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.012 [2024-11-19 09:39:55.645529] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:09.012 [2024-11-19 09:39:55.645581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357917 ] 00:22:09.012 [2024-11-19 09:39:55.730557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.271 [2024-11-19 09:39:55.759707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.271 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.271 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:09.271 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC 00:22:09.271 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:09.531 [2024-11-19 09:39:56.157004] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.531 TLSTESTn1 00:22:09.531 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
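Pulling the RPC invocations out of the trace above, the target-side setup done by `setup_nvmf_tgt` and the initiator-side attach that creates `TLSTESTn1` boil down to the following sequence. This is a non-runnable summary (it needs a live SPDK target and bdevperf instance); script paths are shortened to `rpc.py`, all arguments are as they appear in the log:

```
# Target side: transport, subsystem, TLS listener (-k), backing bdev, namespace
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Register the PSK file under the name "key0" and allow host1 to use it
rpc.py keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf socket): same key name, then a TLS-protected attach
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
```

Note the ordering matters: the earlier error in this log came from running `nvmf_subsystem_add_host --psk key0` before `keyring_file_add_key` had registered the key.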
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:09.791 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:09.792 "subsystems": [ 00:22:09.792 { 00:22:09.792 "subsystem": "keyring", 00:22:09.792 "config": [ 00:22:09.792 { 00:22:09.792 "method": "keyring_file_add_key", 00:22:09.792 "params": { 00:22:09.792 "name": "key0", 00:22:09.792 "path": "/tmp/tmp.OFt8sCkxfC" 00:22:09.792 } 00:22:09.792 } 00:22:09.792 ] 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "subsystem": "iobuf", 00:22:09.792 "config": [ 00:22:09.792 { 00:22:09.792 "method": "iobuf_set_options", 00:22:09.792 "params": { 00:22:09.792 "small_pool_count": 8192, 00:22:09.792 "large_pool_count": 1024, 00:22:09.792 "small_bufsize": 8192, 00:22:09.792 "large_bufsize": 135168, 00:22:09.792 "enable_numa": false 00:22:09.792 } 00:22:09.792 } 00:22:09.792 ] 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "subsystem": "sock", 00:22:09.792 "config": [ 00:22:09.792 { 00:22:09.792 "method": "sock_set_default_impl", 00:22:09.792 "params": { 00:22:09.792 "impl_name": "posix" 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "sock_impl_set_options", 00:22:09.792 "params": { 00:22:09.792 "impl_name": "ssl", 00:22:09.792 "recv_buf_size": 4096, 00:22:09.792 "send_buf_size": 4096, 00:22:09.792 "enable_recv_pipe": true, 00:22:09.792 "enable_quickack": false, 00:22:09.792 "enable_placement_id": 0, 00:22:09.792 "enable_zerocopy_send_server": true, 00:22:09.792 "enable_zerocopy_send_client": false, 00:22:09.792 "zerocopy_threshold": 0, 00:22:09.792 "tls_version": 0, 00:22:09.792 "enable_ktls": false 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "sock_impl_set_options", 00:22:09.792 "params": { 00:22:09.792 "impl_name": "posix", 00:22:09.792 "recv_buf_size": 2097152, 00:22:09.792 "send_buf_size": 2097152, 00:22:09.792 "enable_recv_pipe": true, 00:22:09.792 "enable_quickack": false, 00:22:09.792 "enable_placement_id": 0, 
00:22:09.792 "enable_zerocopy_send_server": true, 00:22:09.792 "enable_zerocopy_send_client": false, 00:22:09.792 "zerocopy_threshold": 0, 00:22:09.792 "tls_version": 0, 00:22:09.792 "enable_ktls": false 00:22:09.792 } 00:22:09.792 } 00:22:09.792 ] 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "subsystem": "vmd", 00:22:09.792 "config": [] 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "subsystem": "accel", 00:22:09.792 "config": [ 00:22:09.792 { 00:22:09.792 "method": "accel_set_options", 00:22:09.792 "params": { 00:22:09.792 "small_cache_size": 128, 00:22:09.792 "large_cache_size": 16, 00:22:09.792 "task_count": 2048, 00:22:09.792 "sequence_count": 2048, 00:22:09.792 "buf_count": 2048 00:22:09.792 } 00:22:09.792 } 00:22:09.792 ] 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "subsystem": "bdev", 00:22:09.792 "config": [ 00:22:09.792 { 00:22:09.792 "method": "bdev_set_options", 00:22:09.792 "params": { 00:22:09.792 "bdev_io_pool_size": 65535, 00:22:09.792 "bdev_io_cache_size": 256, 00:22:09.792 "bdev_auto_examine": true, 00:22:09.792 "iobuf_small_cache_size": 128, 00:22:09.792 "iobuf_large_cache_size": 16 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "bdev_raid_set_options", 00:22:09.792 "params": { 00:22:09.792 "process_window_size_kb": 1024, 00:22:09.792 "process_max_bandwidth_mb_sec": 0 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "bdev_iscsi_set_options", 00:22:09.792 "params": { 00:22:09.792 "timeout_sec": 30 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "bdev_nvme_set_options", 00:22:09.792 "params": { 00:22:09.792 "action_on_timeout": "none", 00:22:09.792 "timeout_us": 0, 00:22:09.792 "timeout_admin_us": 0, 00:22:09.792 "keep_alive_timeout_ms": 10000, 00:22:09.792 "arbitration_burst": 0, 00:22:09.792 "low_priority_weight": 0, 00:22:09.792 "medium_priority_weight": 0, 00:22:09.792 "high_priority_weight": 0, 00:22:09.792 "nvme_adminq_poll_period_us": 10000, 00:22:09.792 "nvme_ioq_poll_period_us": 0, 
00:22:09.792 "io_queue_requests": 0, 00:22:09.792 "delay_cmd_submit": true, 00:22:09.792 "transport_retry_count": 4, 00:22:09.792 "bdev_retry_count": 3, 00:22:09.792 "transport_ack_timeout": 0, 00:22:09.792 "ctrlr_loss_timeout_sec": 0, 00:22:09.792 "reconnect_delay_sec": 0, 00:22:09.792 "fast_io_fail_timeout_sec": 0, 00:22:09.792 "disable_auto_failback": false, 00:22:09.792 "generate_uuids": false, 00:22:09.792 "transport_tos": 0, 00:22:09.792 "nvme_error_stat": false, 00:22:09.792 "rdma_srq_size": 0, 00:22:09.792 "io_path_stat": false, 00:22:09.792 "allow_accel_sequence": false, 00:22:09.792 "rdma_max_cq_size": 0, 00:22:09.792 "rdma_cm_event_timeout_ms": 0, 00:22:09.792 "dhchap_digests": [ 00:22:09.792 "sha256", 00:22:09.792 "sha384", 00:22:09.792 "sha512" 00:22:09.792 ], 00:22:09.792 "dhchap_dhgroups": [ 00:22:09.792 "null", 00:22:09.792 "ffdhe2048", 00:22:09.792 "ffdhe3072", 00:22:09.792 "ffdhe4096", 00:22:09.792 "ffdhe6144", 00:22:09.792 "ffdhe8192" 00:22:09.792 ] 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "bdev_nvme_set_hotplug", 00:22:09.792 "params": { 00:22:09.792 "period_us": 100000, 00:22:09.792 "enable": false 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "bdev_malloc_create", 00:22:09.792 "params": { 00:22:09.792 "name": "malloc0", 00:22:09.792 "num_blocks": 8192, 00:22:09.792 "block_size": 4096, 00:22:09.792 "physical_block_size": 4096, 00:22:09.792 "uuid": "c61ce8ec-ac62-4a1e-8a2c-e5f0ddf61a4e", 00:22:09.792 "optimal_io_boundary": 0, 00:22:09.792 "md_size": 0, 00:22:09.792 "dif_type": 0, 00:22:09.792 "dif_is_head_of_md": false, 00:22:09.792 "dif_pi_format": 0 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "bdev_wait_for_examine" 00:22:09.792 } 00:22:09.792 ] 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "subsystem": "nbd", 00:22:09.792 "config": [] 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "subsystem": "scheduler", 00:22:09.792 "config": [ 00:22:09.792 { 00:22:09.792 "method": 
"framework_set_scheduler", 00:22:09.792 "params": { 00:22:09.792 "name": "static" 00:22:09.792 } 00:22:09.792 } 00:22:09.792 ] 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "subsystem": "nvmf", 00:22:09.792 "config": [ 00:22:09.792 { 00:22:09.792 "method": "nvmf_set_config", 00:22:09.792 "params": { 00:22:09.792 "discovery_filter": "match_any", 00:22:09.792 "admin_cmd_passthru": { 00:22:09.792 "identify_ctrlr": false 00:22:09.792 }, 00:22:09.792 "dhchap_digests": [ 00:22:09.792 "sha256", 00:22:09.792 "sha384", 00:22:09.792 "sha512" 00:22:09.792 ], 00:22:09.792 "dhchap_dhgroups": [ 00:22:09.792 "null", 00:22:09.792 "ffdhe2048", 00:22:09.792 "ffdhe3072", 00:22:09.792 "ffdhe4096", 00:22:09.792 "ffdhe6144", 00:22:09.792 "ffdhe8192" 00:22:09.792 ] 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "nvmf_set_max_subsystems", 00:22:09.792 "params": { 00:22:09.792 "max_subsystems": 1024 00:22:09.792 } 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "method": "nvmf_set_crdt", 00:22:09.792 "params": { 00:22:09.792 "crdt1": 0, 00:22:09.792 "crdt2": 0, 00:22:09.792 "crdt3": 0 00:22:09.792 } 00:22:09.792 }, 00:22:09.793 { 00:22:09.793 "method": "nvmf_create_transport", 00:22:09.793 "params": { 00:22:09.793 "trtype": "TCP", 00:22:09.793 "max_queue_depth": 128, 00:22:09.793 "max_io_qpairs_per_ctrlr": 127, 00:22:09.793 "in_capsule_data_size": 4096, 00:22:09.793 "max_io_size": 131072, 00:22:09.793 "io_unit_size": 131072, 00:22:09.793 "max_aq_depth": 128, 00:22:09.793 "num_shared_buffers": 511, 00:22:09.793 "buf_cache_size": 4294967295, 00:22:09.793 "dif_insert_or_strip": false, 00:22:09.793 "zcopy": false, 00:22:09.793 "c2h_success": false, 00:22:09.793 "sock_priority": 0, 00:22:09.793 "abort_timeout_sec": 1, 00:22:09.793 "ack_timeout": 0, 00:22:09.793 "data_wr_pool_size": 0 00:22:09.793 } 00:22:09.793 }, 00:22:09.793 { 00:22:09.793 "method": "nvmf_create_subsystem", 00:22:09.793 "params": { 00:22:09.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.793 
"allow_any_host": false, 00:22:09.793 "serial_number": "SPDK00000000000001", 00:22:09.793 "model_number": "SPDK bdev Controller", 00:22:09.793 "max_namespaces": 10, 00:22:09.793 "min_cntlid": 1, 00:22:09.793 "max_cntlid": 65519, 00:22:09.793 "ana_reporting": false 00:22:09.793 } 00:22:09.793 }, 00:22:09.793 { 00:22:09.793 "method": "nvmf_subsystem_add_host", 00:22:09.793 "params": { 00:22:09.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.793 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.793 "psk": "key0" 00:22:09.793 } 00:22:09.793 }, 00:22:09.793 { 00:22:09.793 "method": "nvmf_subsystem_add_ns", 00:22:09.793 "params": { 00:22:09.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.793 "namespace": { 00:22:09.793 "nsid": 1, 00:22:09.793 "bdev_name": "malloc0", 00:22:09.793 "nguid": "C61CE8ECAC624A1E8A2CE5F0DDF61A4E", 00:22:09.793 "uuid": "c61ce8ec-ac62-4a1e-8a2c-e5f0ddf61a4e", 00:22:09.793 "no_auto_visible": false 00:22:09.793 } 00:22:09.793 } 00:22:09.793 }, 00:22:09.793 { 00:22:09.793 "method": "nvmf_subsystem_add_listener", 00:22:09.793 "params": { 00:22:09.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.793 "listen_address": { 00:22:09.793 "trtype": "TCP", 00:22:09.793 "adrfam": "IPv4", 00:22:09.793 "traddr": "10.0.0.2", 00:22:09.793 "trsvcid": "4420" 00:22:09.793 }, 00:22:09.793 "secure_channel": true 00:22:09.793 } 00:22:09.793 } 00:22:09.793 ] 00:22:09.793 } 00:22:09.793 ] 00:22:09.793 }' 00:22:09.793 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:10.055 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:10.055 "subsystems": [ 00:22:10.055 { 00:22:10.055 "subsystem": "keyring", 00:22:10.055 "config": [ 00:22:10.055 { 00:22:10.055 "method": "keyring_file_add_key", 00:22:10.055 "params": { 00:22:10.055 "name": "key0", 00:22:10.055 "path": "/tmp/tmp.OFt8sCkxfC" 00:22:10.055 } 
00:22:10.055 } 00:22:10.055 ] 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "subsystem": "iobuf", 00:22:10.055 "config": [ 00:22:10.055 { 00:22:10.055 "method": "iobuf_set_options", 00:22:10.055 "params": { 00:22:10.055 "small_pool_count": 8192, 00:22:10.055 "large_pool_count": 1024, 00:22:10.055 "small_bufsize": 8192, 00:22:10.055 "large_bufsize": 135168, 00:22:10.055 "enable_numa": false 00:22:10.055 } 00:22:10.055 } 00:22:10.055 ] 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "subsystem": "sock", 00:22:10.055 "config": [ 00:22:10.055 { 00:22:10.055 "method": "sock_set_default_impl", 00:22:10.055 "params": { 00:22:10.055 "impl_name": "posix" 00:22:10.055 } 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "method": "sock_impl_set_options", 00:22:10.055 "params": { 00:22:10.055 "impl_name": "ssl", 00:22:10.055 "recv_buf_size": 4096, 00:22:10.055 "send_buf_size": 4096, 00:22:10.055 "enable_recv_pipe": true, 00:22:10.055 "enable_quickack": false, 00:22:10.055 "enable_placement_id": 0, 00:22:10.055 "enable_zerocopy_send_server": true, 00:22:10.055 "enable_zerocopy_send_client": false, 00:22:10.055 "zerocopy_threshold": 0, 00:22:10.055 "tls_version": 0, 00:22:10.055 "enable_ktls": false 00:22:10.055 } 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "method": "sock_impl_set_options", 00:22:10.055 "params": { 00:22:10.055 "impl_name": "posix", 00:22:10.055 "recv_buf_size": 2097152, 00:22:10.055 "send_buf_size": 2097152, 00:22:10.055 "enable_recv_pipe": true, 00:22:10.055 "enable_quickack": false, 00:22:10.055 "enable_placement_id": 0, 00:22:10.055 "enable_zerocopy_send_server": true, 00:22:10.055 "enable_zerocopy_send_client": false, 00:22:10.055 "zerocopy_threshold": 0, 00:22:10.055 "tls_version": 0, 00:22:10.055 "enable_ktls": false 00:22:10.055 } 00:22:10.055 } 00:22:10.055 ] 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "subsystem": "vmd", 00:22:10.055 "config": [] 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "subsystem": "accel", 00:22:10.055 "config": [ 00:22:10.055 { 00:22:10.055 
"method": "accel_set_options", 00:22:10.055 "params": { 00:22:10.055 "small_cache_size": 128, 00:22:10.055 "large_cache_size": 16, 00:22:10.055 "task_count": 2048, 00:22:10.055 "sequence_count": 2048, 00:22:10.055 "buf_count": 2048 00:22:10.055 } 00:22:10.055 } 00:22:10.055 ] 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "subsystem": "bdev", 00:22:10.055 "config": [ 00:22:10.055 { 00:22:10.055 "method": "bdev_set_options", 00:22:10.055 "params": { 00:22:10.055 "bdev_io_pool_size": 65535, 00:22:10.055 "bdev_io_cache_size": 256, 00:22:10.055 "bdev_auto_examine": true, 00:22:10.055 "iobuf_small_cache_size": 128, 00:22:10.055 "iobuf_large_cache_size": 16 00:22:10.055 } 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "method": "bdev_raid_set_options", 00:22:10.055 "params": { 00:22:10.055 "process_window_size_kb": 1024, 00:22:10.055 "process_max_bandwidth_mb_sec": 0 00:22:10.055 } 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "method": "bdev_iscsi_set_options", 00:22:10.055 "params": { 00:22:10.055 "timeout_sec": 30 00:22:10.055 } 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "method": "bdev_nvme_set_options", 00:22:10.055 "params": { 00:22:10.055 "action_on_timeout": "none", 00:22:10.055 "timeout_us": 0, 00:22:10.055 "timeout_admin_us": 0, 00:22:10.055 "keep_alive_timeout_ms": 10000, 00:22:10.055 "arbitration_burst": 0, 00:22:10.055 "low_priority_weight": 0, 00:22:10.055 "medium_priority_weight": 0, 00:22:10.055 "high_priority_weight": 0, 00:22:10.055 "nvme_adminq_poll_period_us": 10000, 00:22:10.055 "nvme_ioq_poll_period_us": 0, 00:22:10.055 "io_queue_requests": 512, 00:22:10.055 "delay_cmd_submit": true, 00:22:10.055 "transport_retry_count": 4, 00:22:10.055 "bdev_retry_count": 3, 00:22:10.055 "transport_ack_timeout": 0, 00:22:10.055 "ctrlr_loss_timeout_sec": 0, 00:22:10.055 "reconnect_delay_sec": 0, 00:22:10.055 "fast_io_fail_timeout_sec": 0, 00:22:10.055 "disable_auto_failback": false, 00:22:10.055 "generate_uuids": false, 00:22:10.055 "transport_tos": 0, 00:22:10.055 
"nvme_error_stat": false, 00:22:10.055 "rdma_srq_size": 0, 00:22:10.055 "io_path_stat": false, 00:22:10.055 "allow_accel_sequence": false, 00:22:10.055 "rdma_max_cq_size": 0, 00:22:10.055 "rdma_cm_event_timeout_ms": 0, 00:22:10.055 "dhchap_digests": [ 00:22:10.055 "sha256", 00:22:10.055 "sha384", 00:22:10.055 "sha512" 00:22:10.055 ], 00:22:10.055 "dhchap_dhgroups": [ 00:22:10.055 "null", 00:22:10.055 "ffdhe2048", 00:22:10.055 "ffdhe3072", 00:22:10.055 "ffdhe4096", 00:22:10.055 "ffdhe6144", 00:22:10.055 "ffdhe8192" 00:22:10.055 ] 00:22:10.055 } 00:22:10.055 }, 00:22:10.055 { 00:22:10.056 "method": "bdev_nvme_attach_controller", 00:22:10.056 "params": { 00:22:10.056 "name": "TLSTEST", 00:22:10.056 "trtype": "TCP", 00:22:10.056 "adrfam": "IPv4", 00:22:10.056 "traddr": "10.0.0.2", 00:22:10.056 "trsvcid": "4420", 00:22:10.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.056 "prchk_reftag": false, 00:22:10.056 "prchk_guard": false, 00:22:10.056 "ctrlr_loss_timeout_sec": 0, 00:22:10.056 "reconnect_delay_sec": 0, 00:22:10.056 "fast_io_fail_timeout_sec": 0, 00:22:10.056 "psk": "key0", 00:22:10.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.056 "hdgst": false, 00:22:10.056 "ddgst": false, 00:22:10.056 "multipath": "multipath" 00:22:10.056 } 00:22:10.056 }, 00:22:10.056 { 00:22:10.056 "method": "bdev_nvme_set_hotplug", 00:22:10.056 "params": { 00:22:10.056 "period_us": 100000, 00:22:10.056 "enable": false 00:22:10.056 } 00:22:10.056 }, 00:22:10.056 { 00:22:10.056 "method": "bdev_wait_for_examine" 00:22:10.056 } 00:22:10.056 ] 00:22:10.056 }, 00:22:10.056 { 00:22:10.056 "subsystem": "nbd", 00:22:10.056 "config": [] 00:22:10.056 } 00:22:10.056 ] 00:22:10.056 }' 00:22:10.056 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 357917 00:22:10.056 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357917 ']' 00:22:10.056 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 357917 00:22:10.056 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:10.056 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.056 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357917 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357917' 00:22:10.317 killing process with pid 357917 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357917 00:22:10.317 Received shutdown signal, test time was about 10.000000 seconds 00:22:10.317 00:22:10.317 Latency(us) 00:22:10.317 [2024-11-19T08:39:57.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.317 [2024-11-19T08:39:57.065Z] =================================================================================================================== 00:22:10.317 [2024-11-19T08:39:57.065Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357917 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 357541 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357541 ']' 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357541 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357541 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357541' 00:22:10.317 killing process with pid 357541 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357541 00:22:10.317 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357541 00:22:10.577 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:10.577 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.577 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.577 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.577 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:10.577 "subsystems": [ 00:22:10.577 { 00:22:10.577 "subsystem": "keyring", 00:22:10.577 "config": [ 00:22:10.577 { 00:22:10.577 "method": "keyring_file_add_key", 00:22:10.577 "params": { 00:22:10.577 "name": "key0", 00:22:10.577 "path": "/tmp/tmp.OFt8sCkxfC" 00:22:10.577 } 00:22:10.577 } 00:22:10.577 ] 00:22:10.577 }, 00:22:10.578 { 00:22:10.578 "subsystem": "iobuf", 00:22:10.578 "config": [ 00:22:10.578 { 00:22:10.578 "method": "iobuf_set_options", 00:22:10.578 "params": { 00:22:10.578 "small_pool_count": 8192, 00:22:10.578 "large_pool_count": 1024, 00:22:10.578 "small_bufsize": 8192, 00:22:10.578 "large_bufsize": 135168, 
00:22:10.578 "enable_numa": false 00:22:10.578 } 00:22:10.578 } 00:22:10.578 ] 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "subsystem": "sock", 00:22:10.578 "config": [ 00:22:10.578 { 00:22:10.578 "method": "sock_set_default_impl", 00:22:10.578 "params": { 00:22:10.578 "impl_name": "posix" 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "method": "sock_impl_set_options", 00:22:10.578 "params": { 00:22:10.578 "impl_name": "ssl", 00:22:10.578 "recv_buf_size": 4096, 00:22:10.578 "send_buf_size": 4096, 00:22:10.578 "enable_recv_pipe": true, 00:22:10.578 "enable_quickack": false, 00:22:10.578 "enable_placement_id": 0, 00:22:10.578 "enable_zerocopy_send_server": true, 00:22:10.578 "enable_zerocopy_send_client": false, 00:22:10.578 "zerocopy_threshold": 0, 00:22:10.578 "tls_version": 0, 00:22:10.578 "enable_ktls": false 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "method": "sock_impl_set_options", 00:22:10.578 "params": { 00:22:10.578 "impl_name": "posix", 00:22:10.578 "recv_buf_size": 2097152, 00:22:10.578 "send_buf_size": 2097152, 00:22:10.578 "enable_recv_pipe": true, 00:22:10.578 "enable_quickack": false, 00:22:10.578 "enable_placement_id": 0, 00:22:10.578 "enable_zerocopy_send_server": true, 00:22:10.578 "enable_zerocopy_send_client": false, 00:22:10.578 "zerocopy_threshold": 0, 00:22:10.578 "tls_version": 0, 00:22:10.578 "enable_ktls": false 00:22:10.578 } 00:22:10.578 } 00:22:10.578 ] 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "subsystem": "vmd", 00:22:10.578 "config": [] 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "subsystem": "accel", 00:22:10.578 "config": [ 00:22:10.578 { 00:22:10.578 "method": "accel_set_options", 00:22:10.578 "params": { 00:22:10.578 "small_cache_size": 128, 00:22:10.578 "large_cache_size": 16, 00:22:10.578 "task_count": 2048, 00:22:10.578 "sequence_count": 2048, 00:22:10.578 "buf_count": 2048 00:22:10.578 } 00:22:10.578 } 00:22:10.578 ] 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "subsystem": "bdev", 00:22:10.578 
"config": [ 00:22:10.578 { 00:22:10.578 "method": "bdev_set_options", 00:22:10.578 "params": { 00:22:10.578 "bdev_io_pool_size": 65535, 00:22:10.578 "bdev_io_cache_size": 256, 00:22:10.578 "bdev_auto_examine": true, 00:22:10.578 "iobuf_small_cache_size": 128, 00:22:10.578 "iobuf_large_cache_size": 16 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "method": "bdev_raid_set_options", 00:22:10.578 "params": { 00:22:10.578 "process_window_size_kb": 1024, 00:22:10.578 "process_max_bandwidth_mb_sec": 0 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "method": "bdev_iscsi_set_options", 00:22:10.578 "params": { 00:22:10.578 "timeout_sec": 30 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "method": "bdev_nvme_set_options", 00:22:10.578 "params": { 00:22:10.578 "action_on_timeout": "none", 00:22:10.578 "timeout_us": 0, 00:22:10.578 "timeout_admin_us": 0, 00:22:10.578 "keep_alive_timeout_ms": 10000, 00:22:10.578 "arbitration_burst": 0, 00:22:10.578 "low_priority_weight": 0, 00:22:10.578 "medium_priority_weight": 0, 00:22:10.578 "high_priority_weight": 0, 00:22:10.578 "nvme_adminq_poll_period_us": 10000, 00:22:10.578 "nvme_ioq_poll_period_us": 0, 00:22:10.578 "io_queue_requests": 0, 00:22:10.578 "delay_cmd_submit": true, 00:22:10.578 "transport_retry_count": 4, 00:22:10.578 "bdev_retry_count": 3, 00:22:10.578 "transport_ack_timeout": 0, 00:22:10.578 "ctrlr_loss_timeout_sec": 0, 00:22:10.578 "reconnect_delay_sec": 0, 00:22:10.578 "fast_io_fail_timeout_sec": 0, 00:22:10.578 "disable_auto_failback": false, 00:22:10.578 "generate_uuids": false, 00:22:10.578 "transport_tos": 0, 00:22:10.578 "nvme_error_stat": false, 00:22:10.578 "rdma_srq_size": 0, 00:22:10.578 "io_path_stat": false, 00:22:10.578 "allow_accel_sequence": false, 00:22:10.578 "rdma_max_cq_size": 0, 00:22:10.578 "rdma_cm_event_timeout_ms": 0, 00:22:10.578 "dhchap_digests": [ 00:22:10.578 "sha256", 00:22:10.578 "sha384", 00:22:10.578 "sha512" 00:22:10.578 ], 00:22:10.578 
"dhchap_dhgroups": [ 00:22:10.578 "null", 00:22:10.578 "ffdhe2048", 00:22:10.578 "ffdhe3072", 00:22:10.578 "ffdhe4096", 00:22:10.578 "ffdhe6144", 00:22:10.578 "ffdhe8192" 00:22:10.578 ] 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "method": "bdev_nvme_set_hotplug", 00:22:10.578 "params": { 00:22:10.578 "period_us": 100000, 00:22:10.578 "enable": false 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "method": "bdev_malloc_create", 00:22:10.578 "params": { 00:22:10.578 "name": "malloc0", 00:22:10.578 "num_blocks": 8192, 00:22:10.578 "block_size": 4096, 00:22:10.578 "physical_block_size": 4096, 00:22:10.578 "uuid": "c61ce8ec-ac62-4a1e-8a2c-e5f0ddf61a4e", 00:22:10.578 "optimal_io_boundary": 0, 00:22:10.578 "md_size": 0, 00:22:10.578 "dif_type": 0, 00:22:10.578 "dif_is_head_of_md": false, 00:22:10.578 "dif_pi_format": 0 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "method": "bdev_wait_for_examine" 00:22:10.578 } 00:22:10.578 ] 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "subsystem": "nbd", 00:22:10.578 "config": [] 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "subsystem": "scheduler", 00:22:10.578 "config": [ 00:22:10.578 { 00:22:10.578 "method": "framework_set_scheduler", 00:22:10.578 "params": { 00:22:10.578 "name": "static" 00:22:10.578 } 00:22:10.578 } 00:22:10.578 ] 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "subsystem": "nvmf", 00:22:10.578 "config": [ 00:22:10.578 { 00:22:10.578 "method": "nvmf_set_config", 00:22:10.578 "params": { 00:22:10.578 "discovery_filter": "match_any", 00:22:10.578 "admin_cmd_passthru": { 00:22:10.578 "identify_ctrlr": false 00:22:10.578 }, 00:22:10.578 "dhchap_digests": [ 00:22:10.578 "sha256", 00:22:10.578 "sha384", 00:22:10.578 "sha512" 00:22:10.578 ], 00:22:10.578 "dhchap_dhgroups": [ 00:22:10.578 "null", 00:22:10.578 "ffdhe2048", 00:22:10.578 "ffdhe3072", 00:22:10.578 "ffdhe4096", 00:22:10.578 "ffdhe6144", 00:22:10.578 "ffdhe8192" 00:22:10.578 ] 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 
00:22:10.578 "method": "nvmf_set_max_subsystems", 00:22:10.578 "params": { 00:22:10.578 "max_subsystems": 1024 00:22:10.578 } 00:22:10.578 }, 00:22:10.578 { 00:22:10.578 "method": "nvmf_set_crdt", 00:22:10.578 "params": { 00:22:10.578 "crdt1": 0, 00:22:10.578 "crdt2": 0, 00:22:10.578 "crdt3": 0 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "nvmf_create_transport", 00:22:10.579 "params": { 00:22:10.579 "trtype": "TCP", 00:22:10.579 "max_queue_depth": 128, 00:22:10.579 "max_io_qpairs_per_ctrlr": 127, 00:22:10.579 "in_capsule_data_size": 4096, 00:22:10.579 "max_io_size": 131072, 00:22:10.579 "io_unit_size": 131072, 00:22:10.579 "max_aq_depth": 128, 00:22:10.579 "num_shared_buffers": 511, 00:22:10.579 "buf_cache_size": 4294967295, 00:22:10.579 "dif_insert_or_strip": false, 00:22:10.579 "zcopy": false, 00:22:10.579 "c2h_success": false, 00:22:10.579 "sock_priority": 0, 00:22:10.579 "abort_timeout_sec": 1, 00:22:10.579 "ack_timeout": 0, 00:22:10.579 "data_wr_pool_size": 0 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "nvmf_create_subsystem", 00:22:10.579 "params": { 00:22:10.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.579 "allow_any_host": false, 00:22:10.579 "serial_number": "SPDK00000000000001", 00:22:10.579 "model_number": "SPDK bdev Controller", 00:22:10.579 "max_namespaces": 10, 00:22:10.579 "min_cntlid": 1, 00:22:10.579 "max_cntlid": 65519, 00:22:10.579 "ana_reporting": false 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "nvmf_subsystem_add_host", 00:22:10.579 "params": { 00:22:10.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.579 "host": "nqn.2016-06.io.spdk:host1", 00:22:10.579 "psk": "key0" 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "nvmf_subsystem_add_ns", 00:22:10.579 "params": { 00:22:10.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.579 "namespace": { 00:22:10.579 "nsid": 1, 00:22:10.579 "bdev_name": "malloc0", 00:22:10.579 "nguid": 
"C61CE8ECAC624A1E8A2CE5F0DDF61A4E", 00:22:10.579 "uuid": "c61ce8ec-ac62-4a1e-8a2c-e5f0ddf61a4e", 00:22:10.579 "no_auto_visible": false 00:22:10.579 } 00:22:10.579 } 00:22:10.579 }, 00:22:10.579 { 00:22:10.579 "method": "nvmf_subsystem_add_listener", 00:22:10.579 "params": { 00:22:10.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.579 "listen_address": { 00:22:10.579 "trtype": "TCP", 00:22:10.579 "adrfam": "IPv4", 00:22:10.579 "traddr": "10.0.0.2", 00:22:10.579 "trsvcid": "4420" 00:22:10.579 }, 00:22:10.579 "secure_channel": true 00:22:10.579 } 00:22:10.579 } 00:22:10.579 ] 00:22:10.579 } 00:22:10.579 ] 00:22:10.579 }' 00:22:10.579 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=358255 00:22:10.579 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 358255 00:22:10.579 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:10.579 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358255 ']' 00:22:10.579 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.579 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.579 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:10.579 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.579 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.579 [2024-11-19 09:39:57.169274] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:10.579 [2024-11-19 09:39:57.169332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.579 [2024-11-19 09:39:57.260672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.579 [2024-11-19 09:39:57.290359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.579 [2024-11-19 09:39:57.290386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.579 [2024-11-19 09:39:57.290392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.579 [2024-11-19 09:39:57.290397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.579 [2024-11-19 09:39:57.290401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.579 [2024-11-19 09:39:57.290880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.840 [2024-11-19 09:39:57.483338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.840 [2024-11-19 09:39:57.515360] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:10.840 [2024-11-19 09:39:57.515555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=358603 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 358603 /var/tmp/bdevperf.sock 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358603 ']' 00:22:11.411 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.411 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.411 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c 
/dev/fd/63 00:22:11.411 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.411 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.411 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.411 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:11.411 "subsystems": [ 00:22:11.411 { 00:22:11.411 "subsystem": "keyring", 00:22:11.411 "config": [ 00:22:11.411 { 00:22:11.411 "method": "keyring_file_add_key", 00:22:11.411 "params": { 00:22:11.411 "name": "key0", 00:22:11.411 "path": "/tmp/tmp.OFt8sCkxfC" 00:22:11.411 } 00:22:11.411 } 00:22:11.411 ] 00:22:11.411 }, 00:22:11.411 { 00:22:11.411 "subsystem": "iobuf", 00:22:11.411 "config": [ 00:22:11.411 { 00:22:11.411 "method": "iobuf_set_options", 00:22:11.411 "params": { 00:22:11.411 "small_pool_count": 8192, 00:22:11.411 "large_pool_count": 1024, 00:22:11.411 "small_bufsize": 8192, 00:22:11.411 "large_bufsize": 135168, 00:22:11.411 "enable_numa": false 00:22:11.411 } 00:22:11.411 } 00:22:11.411 ] 00:22:11.411 }, 00:22:11.411 { 00:22:11.411 "subsystem": "sock", 00:22:11.411 "config": [ 00:22:11.411 { 00:22:11.411 "method": "sock_set_default_impl", 00:22:11.411 "params": { 00:22:11.411 "impl_name": "posix" 00:22:11.411 } 00:22:11.411 }, 00:22:11.411 { 00:22:11.411 "method": "sock_impl_set_options", 00:22:11.411 "params": { 00:22:11.411 "impl_name": "ssl", 00:22:11.411 "recv_buf_size": 4096, 00:22:11.411 "send_buf_size": 4096, 00:22:11.411 "enable_recv_pipe": true, 00:22:11.411 "enable_quickack": false, 00:22:11.411 "enable_placement_id": 0, 00:22:11.411 "enable_zerocopy_send_server": true, 00:22:11.411 "enable_zerocopy_send_client": false, 00:22:11.411 
"zerocopy_threshold": 0, 00:22:11.411 "tls_version": 0, 00:22:11.411 "enable_ktls": false 00:22:11.411 } 00:22:11.411 }, 00:22:11.411 { 00:22:11.411 "method": "sock_impl_set_options", 00:22:11.411 "params": { 00:22:11.411 "impl_name": "posix", 00:22:11.412 "recv_buf_size": 2097152, 00:22:11.412 "send_buf_size": 2097152, 00:22:11.412 "enable_recv_pipe": true, 00:22:11.412 "enable_quickack": false, 00:22:11.412 "enable_placement_id": 0, 00:22:11.412 "enable_zerocopy_send_server": true, 00:22:11.412 "enable_zerocopy_send_client": false, 00:22:11.412 "zerocopy_threshold": 0, 00:22:11.412 "tls_version": 0, 00:22:11.412 "enable_ktls": false 00:22:11.412 } 00:22:11.412 } 00:22:11.412 ] 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "subsystem": "vmd", 00:22:11.412 "config": [] 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "subsystem": "accel", 00:22:11.412 "config": [ 00:22:11.412 { 00:22:11.412 "method": "accel_set_options", 00:22:11.412 "params": { 00:22:11.412 "small_cache_size": 128, 00:22:11.412 "large_cache_size": 16, 00:22:11.412 "task_count": 2048, 00:22:11.412 "sequence_count": 2048, 00:22:11.412 "buf_count": 2048 00:22:11.412 } 00:22:11.412 } 00:22:11.412 ] 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "subsystem": "bdev", 00:22:11.412 "config": [ 00:22:11.412 { 00:22:11.412 "method": "bdev_set_options", 00:22:11.412 "params": { 00:22:11.412 "bdev_io_pool_size": 65535, 00:22:11.412 "bdev_io_cache_size": 256, 00:22:11.412 "bdev_auto_examine": true, 00:22:11.412 "iobuf_small_cache_size": 128, 00:22:11.412 "iobuf_large_cache_size": 16 00:22:11.412 } 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "method": "bdev_raid_set_options", 00:22:11.412 "params": { 00:22:11.412 "process_window_size_kb": 1024, 00:22:11.412 "process_max_bandwidth_mb_sec": 0 00:22:11.412 } 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "method": "bdev_iscsi_set_options", 00:22:11.412 "params": { 00:22:11.412 "timeout_sec": 30 00:22:11.412 } 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "method": 
"bdev_nvme_set_options", 00:22:11.412 "params": { 00:22:11.412 "action_on_timeout": "none", 00:22:11.412 "timeout_us": 0, 00:22:11.412 "timeout_admin_us": 0, 00:22:11.412 "keep_alive_timeout_ms": 10000, 00:22:11.412 "arbitration_burst": 0, 00:22:11.412 "low_priority_weight": 0, 00:22:11.412 "medium_priority_weight": 0, 00:22:11.412 "high_priority_weight": 0, 00:22:11.412 "nvme_adminq_poll_period_us": 10000, 00:22:11.412 "nvme_ioq_poll_period_us": 0, 00:22:11.412 "io_queue_requests": 512, 00:22:11.412 "delay_cmd_submit": true, 00:22:11.412 "transport_retry_count": 4, 00:22:11.412 "bdev_retry_count": 3, 00:22:11.412 "transport_ack_timeout": 0, 00:22:11.412 "ctrlr_loss_timeout_sec": 0, 00:22:11.412 "reconnect_delay_sec": 0, 00:22:11.412 "fast_io_fail_timeout_sec": 0, 00:22:11.412 "disable_auto_failback": false, 00:22:11.412 "generate_uuids": false, 00:22:11.412 "transport_tos": 0, 00:22:11.412 "nvme_error_stat": false, 00:22:11.412 "rdma_srq_size": 0, 00:22:11.412 "io_path_stat": false, 00:22:11.412 "allow_accel_sequence": false, 00:22:11.412 "rdma_max_cq_size": 0, 00:22:11.412 "rdma_cm_event_timeout_ms": 0, 00:22:11.412 "dhchap_digests": [ 00:22:11.412 "sha256", 00:22:11.412 "sha384", 00:22:11.412 "sha512" 00:22:11.412 ], 00:22:11.412 "dhchap_dhgroups": [ 00:22:11.412 "null", 00:22:11.412 "ffdhe2048", 00:22:11.412 "ffdhe3072", 00:22:11.412 "ffdhe4096", 00:22:11.412 "ffdhe6144", 00:22:11.412 "ffdhe8192" 00:22:11.412 ] 00:22:11.412 } 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "method": "bdev_nvme_attach_controller", 00:22:11.412 "params": { 00:22:11.412 "name": "TLSTEST", 00:22:11.412 "trtype": "TCP", 00:22:11.412 "adrfam": "IPv4", 00:22:11.412 "traddr": "10.0.0.2", 00:22:11.412 "trsvcid": "4420", 00:22:11.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.412 "prchk_reftag": false, 00:22:11.412 "prchk_guard": false, 00:22:11.412 "ctrlr_loss_timeout_sec": 0, 00:22:11.412 "reconnect_delay_sec": 0, 00:22:11.412 "fast_io_fail_timeout_sec": 0, 00:22:11.412 "psk": 
"key0", 00:22:11.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:11.412 "hdgst": false, 00:22:11.412 "ddgst": false, 00:22:11.412 "multipath": "multipath" 00:22:11.412 } 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "method": "bdev_nvme_set_hotplug", 00:22:11.412 "params": { 00:22:11.412 "period_us": 100000, 00:22:11.412 "enable": false 00:22:11.412 } 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "method": "bdev_wait_for_examine" 00:22:11.412 } 00:22:11.412 ] 00:22:11.412 }, 00:22:11.412 { 00:22:11.412 "subsystem": "nbd", 00:22:11.412 "config": [] 00:22:11.412 } 00:22:11.412 ] 00:22:11.412 }' 00:22:11.412 [2024-11-19 09:39:58.048816] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:11.412 [2024-11-19 09:39:58.048869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358603 ] 00:22:11.412 [2024-11-19 09:39:58.131739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.673 [2024-11-19 09:39:58.160676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.673 [2024-11-19 09:39:58.294589] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.243 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.243 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:12.243 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:12.243 Running I/O for 10 seconds... 
00:22:14.570 5265.00 IOPS, 20.57 MiB/s [2024-11-19T08:40:02.259Z] 5430.50 IOPS, 21.21 MiB/s [2024-11-19T08:40:03.199Z] 5623.00 IOPS, 21.96 MiB/s [2024-11-19T08:40:04.140Z] 5088.25 IOPS, 19.88 MiB/s [2024-11-19T08:40:05.083Z] 5283.80 IOPS, 20.64 MiB/s [2024-11-19T08:40:06.024Z] 5272.17 IOPS, 20.59 MiB/s [2024-11-19T08:40:06.966Z] 5095.57 IOPS, 19.90 MiB/s [2024-11-19T08:40:08.350Z] 4932.62 IOPS, 19.27 MiB/s [2024-11-19T08:40:09.291Z] 5047.00 IOPS, 19.71 MiB/s [2024-11-19T08:40:09.291Z] 5114.40 IOPS, 19.98 MiB/s 00:22:22.543 Latency(us) 00:22:22.543 [2024-11-19T08:40:09.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.543 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:22.543 Verification LBA range: start 0x0 length 0x2000 00:22:22.543 TLSTESTn1 : 10.03 5109.96 19.96 0.00 0.00 25003.46 4587.52 49588.91 00:22:22.543 [2024-11-19T08:40:09.291Z] =================================================================================================================== 00:22:22.543 [2024-11-19T08:40:09.291Z] Total : 5109.96 19.96 0.00 0.00 25003.46 4587.52 49588.91 00:22:22.543 { 00:22:22.543 "results": [ 00:22:22.543 { 00:22:22.543 "job": "TLSTESTn1", 00:22:22.543 "core_mask": "0x4", 00:22:22.543 "workload": "verify", 00:22:22.543 "status": "finished", 00:22:22.543 "verify_range": { 00:22:22.543 "start": 0, 00:22:22.543 "length": 8192 00:22:22.543 }, 00:22:22.543 "queue_depth": 128, 00:22:22.543 "io_size": 4096, 00:22:22.543 "runtime": 10.033537, 00:22:22.543 "iops": 5109.962718032534, 00:22:22.543 "mibps": 19.960791867314587, 00:22:22.543 "io_failed": 0, 00:22:22.543 "io_timeout": 0, 00:22:22.543 "avg_latency_us": 25003.460033157146, 00:22:22.543 "min_latency_us": 4587.52, 00:22:22.543 "max_latency_us": 49588.90666666667 00:22:22.543 } 00:22:22.543 ], 00:22:22.543 "core_count": 1 00:22:22.543 } 00:22:22.543 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:22.543 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 358603 00:22:22.543 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358603 ']' 00:22:22.543 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358603 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358603 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358603' 00:22:22.543 killing process with pid 358603 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358603 00:22:22.543 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.543 00:22:22.543 Latency(us) 00:22:22.543 [2024-11-19T08:40:09.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.543 [2024-11-19T08:40:09.291Z] =================================================================================================================== 00:22:22.543 [2024-11-19T08:40:09.291Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358603 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 358255 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' 
-z 358255 ']' 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358255 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358255 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358255' 00:22:22.543 killing process with pid 358255 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358255 00:22:22.543 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358255 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=360628 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 360628 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 360628 ']' 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.805 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.805 [2024-11-19 09:40:09.401059] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:22.805 [2024-11-19 09:40:09.401115] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.805 [2024-11-19 09:40:09.496985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.805 [2024-11-19 09:40:09.543851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.805 [2024-11-19 09:40:09.543901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.805 [2024-11-19 09:40:09.543909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.805 [2024-11-19 09:40:09.543917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.805 [2024-11-19 09:40:09.543924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:22.805 [2024-11-19 09:40:09.544678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.OFt8sCkxfC 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OFt8sCkxfC 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:23.749 [2024-11-19 09:40:10.429108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.749 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:24.010 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:24.270 [2024-11-19 09:40:10.826113] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:24.271 [2024-11-19 09:40:10.826462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:24.271 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:24.532 malloc0 00:22:24.532 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:24.532 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC 00:22:24.794 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=361164 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 361164 /var/tmp/bdevperf.sock 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 361164 ']' 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:22:25.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.056 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.056 [2024-11-19 09:40:11.690739] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:25.056 [2024-11-19 09:40:11.690816] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361164 ] 00:22:25.056 [2024-11-19 09:40:11.780150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.317 [2024-11-19 09:40:11.814547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.891 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.891 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:25.891 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC 00:22:26.152 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:26.152 [2024-11-19 09:40:12.820432] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.413 nvme0n1 00:22:26.413 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:26.413 Running I/O for 1 seconds... 00:22:27.617 2343.00 IOPS, 9.15 MiB/s 00:22:27.617 Latency(us) 00:22:27.617 [2024-11-19T08:40:14.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.617 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:27.617 Verification LBA range: start 0x0 length 0x2000 00:22:27.617 nvme0n1 : 1.15 2155.27 8.42 0.00 0.00 57115.72 6662.83 186996.05 00:22:27.617 [2024-11-19T08:40:14.365Z] =================================================================================================================== 00:22:27.617 [2024-11-19T08:40:14.365Z] Total : 2155.27 8.42 0.00 0.00 57115.72 6662.83 186996.05 00:22:27.617 { 00:22:27.617 "results": [ 00:22:27.617 { 00:22:27.617 "job": "nvme0n1", 00:22:27.617 "core_mask": "0x2", 00:22:27.617 "workload": "verify", 00:22:27.617 "status": "finished", 00:22:27.617 "verify_range": { 00:22:27.617 "start": 0, 00:22:27.617 "length": 8192 00:22:27.617 }, 00:22:27.617 "queue_depth": 128, 00:22:27.617 "io_size": 4096, 00:22:27.617 "runtime": 1.146494, 00:22:27.617 "iops": 2155.266403487502, 00:22:27.617 "mibps": 8.419009388623055, 00:22:27.617 "io_failed": 0, 00:22:27.617 "io_timeout": 0, 00:22:27.617 "avg_latency_us": 57115.72245514636, 00:22:27.617 "min_latency_us": 6662.826666666667, 00:22:27.617 "max_latency_us": 186996.05333333334 00:22:27.617 } 00:22:27.617 ], 00:22:27.617 "core_count": 1 00:22:27.617 } 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 361164 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 361164 ']' 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 361164 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # 
uname 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361164 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361164' 00:22:27.617 killing process with pid 361164 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 361164 00:22:27.617 Received shutdown signal, test time was about 1.000000 seconds 00:22:27.617 00:22:27.617 Latency(us) 00:22:27.617 [2024-11-19T08:40:14.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.617 [2024-11-19T08:40:14.365Z] =================================================================================================================== 00:22:27.617 [2024-11-19T08:40:14.365Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 361164 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 360628 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 360628 ']' 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 360628 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.617 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 360628 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 360628' 00:22:27.879 killing process with pid 360628 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 360628 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 360628 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=361673 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 361673 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 361673 ']' 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.879 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.879 [2024-11-19 09:40:14.606585] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:27.879 [2024-11-19 09:40:14.606640] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.142 [2024-11-19 09:40:14.702529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.142 [2024-11-19 09:40:14.750405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.142 [2024-11-19 09:40:14.750462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.142 [2024-11-19 09:40:14.750471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.142 [2024-11-19 09:40:14.750478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.142 [2024-11-19 09:40:14.750484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.142 [2024-11-19 09:40:14.751263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.715 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.715 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:28.715 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.715 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.715 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.977 [2024-11-19 09:40:15.481970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.977 malloc0 00:22:28.977 [2024-11-19 09:40:15.512037] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.977 [2024-11-19 09:40:15.512390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=362021 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 362021 /var/tmp/bdevperf.sock 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 362021 ']' 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.977 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.977 [2024-11-19 09:40:15.594715] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:22:28.977 [2024-11-19 09:40:15.594776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362021 ] 00:22:28.977 [2024-11-19 09:40:15.682662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.977 [2024-11-19 09:40:15.716600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.918 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.918 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:29.918 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OFt8sCkxfC 00:22:29.918 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:30.179 [2024-11-19 09:40:16.678263] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.179 nvme0n1 00:22:30.179 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:30.179 Running I/O for 1 seconds... 
00:22:31.563 1025.00 IOPS, 4.00 MiB/s 00:22:31.563 Latency(us) 00:22:31.563 [2024-11-19T08:40:18.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.563 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:31.563 Verification LBA range: start 0x0 length 0x2000 00:22:31.563 nvme0n1 : 1.09 1055.22 4.12 0.00 0.00 117082.60 6089.39 187869.87 00:22:31.563 [2024-11-19T08:40:18.311Z] =================================================================================================================== 00:22:31.563 [2024-11-19T08:40:18.311Z] Total : 1055.22 4.12 0.00 0.00 117082.60 6089.39 187869.87 00:22:31.563 { 00:22:31.563 "results": [ 00:22:31.563 { 00:22:31.563 "job": "nvme0n1", 00:22:31.563 "core_mask": "0x2", 00:22:31.563 "workload": "verify", 00:22:31.563 "status": "finished", 00:22:31.563 "verify_range": { 00:22:31.563 "start": 0, 00:22:31.563 "length": 8192 00:22:31.563 }, 00:22:31.563 "queue_depth": 128, 00:22:31.563 "io_size": 4096, 00:22:31.563 "runtime": 1.093611, 00:22:31.563 "iops": 1055.2198176499687, 00:22:31.563 "mibps": 4.12195241269519, 00:22:31.563 "io_failed": 0, 00:22:31.563 "io_timeout": 0, 00:22:31.563 "avg_latency_us": 117082.59826689775, 00:22:31.563 "min_latency_us": 6089.386666666666, 00:22:31.563 "max_latency_us": 187869.86666666667 00:22:31.563 } 00:22:31.563 ], 00:22:31.563 "core_count": 1 00:22:31.563 } 00:22:31.563 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:31.563 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.563 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.563 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.563 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:31.563 "subsystems": [ 00:22:31.563 { 00:22:31.563 "subsystem": 
"keyring", 00:22:31.563 "config": [ 00:22:31.563 { 00:22:31.563 "method": "keyring_file_add_key", 00:22:31.563 "params": { 00:22:31.563 "name": "key0", 00:22:31.563 "path": "/tmp/tmp.OFt8sCkxfC" 00:22:31.563 } 00:22:31.563 } 00:22:31.563 ] 00:22:31.563 }, 00:22:31.563 { 00:22:31.563 "subsystem": "iobuf", 00:22:31.563 "config": [ 00:22:31.563 { 00:22:31.563 "method": "iobuf_set_options", 00:22:31.563 "params": { 00:22:31.563 "small_pool_count": 8192, 00:22:31.563 "large_pool_count": 1024, 00:22:31.563 "small_bufsize": 8192, 00:22:31.563 "large_bufsize": 135168, 00:22:31.563 "enable_numa": false 00:22:31.563 } 00:22:31.563 } 00:22:31.563 ] 00:22:31.563 }, 00:22:31.563 { 00:22:31.563 "subsystem": "sock", 00:22:31.563 "config": [ 00:22:31.563 { 00:22:31.563 "method": "sock_set_default_impl", 00:22:31.563 "params": { 00:22:31.563 "impl_name": "posix" 00:22:31.563 } 00:22:31.563 }, 00:22:31.563 { 00:22:31.563 "method": "sock_impl_set_options", 00:22:31.563 "params": { 00:22:31.563 "impl_name": "ssl", 00:22:31.563 "recv_buf_size": 4096, 00:22:31.563 "send_buf_size": 4096, 00:22:31.563 "enable_recv_pipe": true, 00:22:31.563 "enable_quickack": false, 00:22:31.563 "enable_placement_id": 0, 00:22:31.563 "enable_zerocopy_send_server": true, 00:22:31.563 "enable_zerocopy_send_client": false, 00:22:31.563 "zerocopy_threshold": 0, 00:22:31.563 "tls_version": 0, 00:22:31.563 "enable_ktls": false 00:22:31.563 } 00:22:31.563 }, 00:22:31.563 { 00:22:31.563 "method": "sock_impl_set_options", 00:22:31.563 "params": { 00:22:31.563 "impl_name": "posix", 00:22:31.563 "recv_buf_size": 2097152, 00:22:31.563 "send_buf_size": 2097152, 00:22:31.563 "enable_recv_pipe": true, 00:22:31.563 "enable_quickack": false, 00:22:31.563 "enable_placement_id": 0, 00:22:31.563 "enable_zerocopy_send_server": true, 00:22:31.563 "enable_zerocopy_send_client": false, 00:22:31.563 "zerocopy_threshold": 0, 00:22:31.563 "tls_version": 0, 00:22:31.563 "enable_ktls": false 00:22:31.563 } 00:22:31.563 } 00:22:31.563 
] 00:22:31.563 }, 00:22:31.563 { 00:22:31.563 "subsystem": "vmd", 00:22:31.564 "config": [] 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "subsystem": "accel", 00:22:31.564 "config": [ 00:22:31.564 { 00:22:31.564 "method": "accel_set_options", 00:22:31.564 "params": { 00:22:31.564 "small_cache_size": 128, 00:22:31.564 "large_cache_size": 16, 00:22:31.564 "task_count": 2048, 00:22:31.564 "sequence_count": 2048, 00:22:31.564 "buf_count": 2048 00:22:31.564 } 00:22:31.564 } 00:22:31.564 ] 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "subsystem": "bdev", 00:22:31.564 "config": [ 00:22:31.564 { 00:22:31.564 "method": "bdev_set_options", 00:22:31.564 "params": { 00:22:31.564 "bdev_io_pool_size": 65535, 00:22:31.564 "bdev_io_cache_size": 256, 00:22:31.564 "bdev_auto_examine": true, 00:22:31.564 "iobuf_small_cache_size": 128, 00:22:31.564 "iobuf_large_cache_size": 16 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "bdev_raid_set_options", 00:22:31.564 "params": { 00:22:31.564 "process_window_size_kb": 1024, 00:22:31.564 "process_max_bandwidth_mb_sec": 0 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "bdev_iscsi_set_options", 00:22:31.564 "params": { 00:22:31.564 "timeout_sec": 30 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "bdev_nvme_set_options", 00:22:31.564 "params": { 00:22:31.564 "action_on_timeout": "none", 00:22:31.564 "timeout_us": 0, 00:22:31.564 "timeout_admin_us": 0, 00:22:31.564 "keep_alive_timeout_ms": 10000, 00:22:31.564 "arbitration_burst": 0, 00:22:31.564 "low_priority_weight": 0, 00:22:31.564 "medium_priority_weight": 0, 00:22:31.564 "high_priority_weight": 0, 00:22:31.564 "nvme_adminq_poll_period_us": 10000, 00:22:31.564 "nvme_ioq_poll_period_us": 0, 00:22:31.564 "io_queue_requests": 0, 00:22:31.564 "delay_cmd_submit": true, 00:22:31.564 "transport_retry_count": 4, 00:22:31.564 "bdev_retry_count": 3, 00:22:31.564 "transport_ack_timeout": 0, 00:22:31.564 "ctrlr_loss_timeout_sec": 0, 
00:22:31.564 "reconnect_delay_sec": 0, 00:22:31.564 "fast_io_fail_timeout_sec": 0, 00:22:31.564 "disable_auto_failback": false, 00:22:31.564 "generate_uuids": false, 00:22:31.564 "transport_tos": 0, 00:22:31.564 "nvme_error_stat": false, 00:22:31.564 "rdma_srq_size": 0, 00:22:31.564 "io_path_stat": false, 00:22:31.564 "allow_accel_sequence": false, 00:22:31.564 "rdma_max_cq_size": 0, 00:22:31.564 "rdma_cm_event_timeout_ms": 0, 00:22:31.564 "dhchap_digests": [ 00:22:31.564 "sha256", 00:22:31.564 "sha384", 00:22:31.564 "sha512" 00:22:31.564 ], 00:22:31.564 "dhchap_dhgroups": [ 00:22:31.564 "null", 00:22:31.564 "ffdhe2048", 00:22:31.564 "ffdhe3072", 00:22:31.564 "ffdhe4096", 00:22:31.564 "ffdhe6144", 00:22:31.564 "ffdhe8192" 00:22:31.564 ] 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "bdev_nvme_set_hotplug", 00:22:31.564 "params": { 00:22:31.564 "period_us": 100000, 00:22:31.564 "enable": false 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "bdev_malloc_create", 00:22:31.564 "params": { 00:22:31.564 "name": "malloc0", 00:22:31.564 "num_blocks": 8192, 00:22:31.564 "block_size": 4096, 00:22:31.564 "physical_block_size": 4096, 00:22:31.564 "uuid": "baef1679-afcd-40b3-a7c9-e1fd96a9b9a2", 00:22:31.564 "optimal_io_boundary": 0, 00:22:31.564 "md_size": 0, 00:22:31.564 "dif_type": 0, 00:22:31.564 "dif_is_head_of_md": false, 00:22:31.564 "dif_pi_format": 0 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "bdev_wait_for_examine" 00:22:31.564 } 00:22:31.564 ] 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "subsystem": "nbd", 00:22:31.564 "config": [] 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "subsystem": "scheduler", 00:22:31.564 "config": [ 00:22:31.564 { 00:22:31.564 "method": "framework_set_scheduler", 00:22:31.564 "params": { 00:22:31.564 "name": "static" 00:22:31.564 } 00:22:31.564 } 00:22:31.564 ] 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "subsystem": "nvmf", 00:22:31.564 "config": [ 00:22:31.564 { 
00:22:31.564 "method": "nvmf_set_config", 00:22:31.564 "params": { 00:22:31.564 "discovery_filter": "match_any", 00:22:31.564 "admin_cmd_passthru": { 00:22:31.564 "identify_ctrlr": false 00:22:31.564 }, 00:22:31.564 "dhchap_digests": [ 00:22:31.564 "sha256", 00:22:31.564 "sha384", 00:22:31.564 "sha512" 00:22:31.564 ], 00:22:31.564 "dhchap_dhgroups": [ 00:22:31.564 "null", 00:22:31.564 "ffdhe2048", 00:22:31.564 "ffdhe3072", 00:22:31.564 "ffdhe4096", 00:22:31.564 "ffdhe6144", 00:22:31.564 "ffdhe8192" 00:22:31.564 ] 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "nvmf_set_max_subsystems", 00:22:31.564 "params": { 00:22:31.564 "max_subsystems": 1024 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "nvmf_set_crdt", 00:22:31.564 "params": { 00:22:31.564 "crdt1": 0, 00:22:31.564 "crdt2": 0, 00:22:31.564 "crdt3": 0 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "nvmf_create_transport", 00:22:31.564 "params": { 00:22:31.564 "trtype": "TCP", 00:22:31.564 "max_queue_depth": 128, 00:22:31.564 "max_io_qpairs_per_ctrlr": 127, 00:22:31.564 "in_capsule_data_size": 4096, 00:22:31.564 "max_io_size": 131072, 00:22:31.564 "io_unit_size": 131072, 00:22:31.564 "max_aq_depth": 128, 00:22:31.564 "num_shared_buffers": 511, 00:22:31.564 "buf_cache_size": 4294967295, 00:22:31.564 "dif_insert_or_strip": false, 00:22:31.564 "zcopy": false, 00:22:31.564 "c2h_success": false, 00:22:31.564 "sock_priority": 0, 00:22:31.564 "abort_timeout_sec": 1, 00:22:31.564 "ack_timeout": 0, 00:22:31.564 "data_wr_pool_size": 0 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "nvmf_create_subsystem", 00:22:31.564 "params": { 00:22:31.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.564 "allow_any_host": false, 00:22:31.564 "serial_number": "00000000000000000000", 00:22:31.564 "model_number": "SPDK bdev Controller", 00:22:31.564 "max_namespaces": 32, 00:22:31.564 "min_cntlid": 1, 00:22:31.564 "max_cntlid": 65519, 00:22:31.564 
"ana_reporting": false 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "nvmf_subsystem_add_host", 00:22:31.564 "params": { 00:22:31.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.564 "host": "nqn.2016-06.io.spdk:host1", 00:22:31.564 "psk": "key0" 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "nvmf_subsystem_add_ns", 00:22:31.564 "params": { 00:22:31.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.564 "namespace": { 00:22:31.564 "nsid": 1, 00:22:31.564 "bdev_name": "malloc0", 00:22:31.564 "nguid": "BAEF1679AFCD40B3A7C9E1FD96A9B9A2", 00:22:31.564 "uuid": "baef1679-afcd-40b3-a7c9-e1fd96a9b9a2", 00:22:31.564 "no_auto_visible": false 00:22:31.564 } 00:22:31.564 } 00:22:31.564 }, 00:22:31.564 { 00:22:31.564 "method": "nvmf_subsystem_add_listener", 00:22:31.564 "params": { 00:22:31.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.564 "listen_address": { 00:22:31.564 "trtype": "TCP", 00:22:31.564 "adrfam": "IPv4", 00:22:31.564 "traddr": "10.0.0.2", 00:22:31.564 "trsvcid": "4420" 00:22:31.564 }, 00:22:31.564 "secure_channel": false, 00:22:31.564 "sock_impl": "ssl" 00:22:31.564 } 00:22:31.564 } 00:22:31.564 ] 00:22:31.564 } 00:22:31.564 ] 00:22:31.564 }' 00:22:31.564 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:31.827 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:31.827 "subsystems": [ 00:22:31.827 { 00:22:31.827 "subsystem": "keyring", 00:22:31.827 "config": [ 00:22:31.827 { 00:22:31.827 "method": "keyring_file_add_key", 00:22:31.827 "params": { 00:22:31.827 "name": "key0", 00:22:31.827 "path": "/tmp/tmp.OFt8sCkxfC" 00:22:31.827 } 00:22:31.827 } 00:22:31.827 ] 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "subsystem": "iobuf", 00:22:31.827 "config": [ 00:22:31.827 { 00:22:31.827 "method": "iobuf_set_options", 00:22:31.827 "params": { 00:22:31.827 
"small_pool_count": 8192, 00:22:31.827 "large_pool_count": 1024, 00:22:31.827 "small_bufsize": 8192, 00:22:31.827 "large_bufsize": 135168, 00:22:31.827 "enable_numa": false 00:22:31.827 } 00:22:31.827 } 00:22:31.827 ] 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "subsystem": "sock", 00:22:31.827 "config": [ 00:22:31.827 { 00:22:31.827 "method": "sock_set_default_impl", 00:22:31.827 "params": { 00:22:31.827 "impl_name": "posix" 00:22:31.827 } 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "method": "sock_impl_set_options", 00:22:31.827 "params": { 00:22:31.827 "impl_name": "ssl", 00:22:31.827 "recv_buf_size": 4096, 00:22:31.827 "send_buf_size": 4096, 00:22:31.827 "enable_recv_pipe": true, 00:22:31.827 "enable_quickack": false, 00:22:31.827 "enable_placement_id": 0, 00:22:31.827 "enable_zerocopy_send_server": true, 00:22:31.827 "enable_zerocopy_send_client": false, 00:22:31.827 "zerocopy_threshold": 0, 00:22:31.827 "tls_version": 0, 00:22:31.827 "enable_ktls": false 00:22:31.827 } 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "method": "sock_impl_set_options", 00:22:31.827 "params": { 00:22:31.827 "impl_name": "posix", 00:22:31.827 "recv_buf_size": 2097152, 00:22:31.827 "send_buf_size": 2097152, 00:22:31.827 "enable_recv_pipe": true, 00:22:31.827 "enable_quickack": false, 00:22:31.827 "enable_placement_id": 0, 00:22:31.827 "enable_zerocopy_send_server": true, 00:22:31.827 "enable_zerocopy_send_client": false, 00:22:31.827 "zerocopy_threshold": 0, 00:22:31.827 "tls_version": 0, 00:22:31.827 "enable_ktls": false 00:22:31.827 } 00:22:31.827 } 00:22:31.827 ] 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "subsystem": "vmd", 00:22:31.827 "config": [] 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "subsystem": "accel", 00:22:31.827 "config": [ 00:22:31.827 { 00:22:31.827 "method": "accel_set_options", 00:22:31.827 "params": { 00:22:31.827 "small_cache_size": 128, 00:22:31.827 "large_cache_size": 16, 00:22:31.827 "task_count": 2048, 00:22:31.827 "sequence_count": 2048, 00:22:31.827 
"buf_count": 2048 00:22:31.827 } 00:22:31.827 } 00:22:31.827 ] 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "subsystem": "bdev", 00:22:31.827 "config": [ 00:22:31.827 { 00:22:31.827 "method": "bdev_set_options", 00:22:31.827 "params": { 00:22:31.827 "bdev_io_pool_size": 65535, 00:22:31.827 "bdev_io_cache_size": 256, 00:22:31.827 "bdev_auto_examine": true, 00:22:31.827 "iobuf_small_cache_size": 128, 00:22:31.827 "iobuf_large_cache_size": 16 00:22:31.827 } 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "method": "bdev_raid_set_options", 00:22:31.827 "params": { 00:22:31.827 "process_window_size_kb": 1024, 00:22:31.827 "process_max_bandwidth_mb_sec": 0 00:22:31.827 } 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "method": "bdev_iscsi_set_options", 00:22:31.827 "params": { 00:22:31.827 "timeout_sec": 30 00:22:31.827 } 00:22:31.827 }, 00:22:31.827 { 00:22:31.827 "method": "bdev_nvme_set_options", 00:22:31.827 "params": { 00:22:31.827 "action_on_timeout": "none", 00:22:31.827 "timeout_us": 0, 00:22:31.827 "timeout_admin_us": 0, 00:22:31.827 "keep_alive_timeout_ms": 10000, 00:22:31.827 "arbitration_burst": 0, 00:22:31.827 "low_priority_weight": 0, 00:22:31.827 "medium_priority_weight": 0, 00:22:31.827 "high_priority_weight": 0, 00:22:31.827 "nvme_adminq_poll_period_us": 10000, 00:22:31.827 "nvme_ioq_poll_period_us": 0, 00:22:31.827 "io_queue_requests": 512, 00:22:31.827 "delay_cmd_submit": true, 00:22:31.827 "transport_retry_count": 4, 00:22:31.827 "bdev_retry_count": 3, 00:22:31.827 "transport_ack_timeout": 0, 00:22:31.827 "ctrlr_loss_timeout_sec": 0, 00:22:31.827 "reconnect_delay_sec": 0, 00:22:31.827 "fast_io_fail_timeout_sec": 0, 00:22:31.827 "disable_auto_failback": false, 00:22:31.827 "generate_uuids": false, 00:22:31.827 "transport_tos": 0, 00:22:31.827 "nvme_error_stat": false, 00:22:31.827 "rdma_srq_size": 0, 00:22:31.827 "io_path_stat": false, 00:22:31.827 "allow_accel_sequence": false, 00:22:31.827 "rdma_max_cq_size": 0, 00:22:31.827 "rdma_cm_event_timeout_ms": 0, 
00:22:31.827 "dhchap_digests": [ 00:22:31.827 "sha256", 00:22:31.827 "sha384", 00:22:31.827 "sha512" 00:22:31.827 ], 00:22:31.827 "dhchap_dhgroups": [ 00:22:31.827 "null", 00:22:31.827 "ffdhe2048", 00:22:31.828 "ffdhe3072", 00:22:31.828 "ffdhe4096", 00:22:31.828 "ffdhe6144", 00:22:31.828 "ffdhe8192" 00:22:31.828 ] 00:22:31.828 } 00:22:31.828 }, 00:22:31.828 { 00:22:31.828 "method": "bdev_nvme_attach_controller", 00:22:31.828 "params": { 00:22:31.828 "name": "nvme0", 00:22:31.828 "trtype": "TCP", 00:22:31.828 "adrfam": "IPv4", 00:22:31.828 "traddr": "10.0.0.2", 00:22:31.828 "trsvcid": "4420", 00:22:31.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.828 "prchk_reftag": false, 00:22:31.828 "prchk_guard": false, 00:22:31.828 "ctrlr_loss_timeout_sec": 0, 00:22:31.828 "reconnect_delay_sec": 0, 00:22:31.828 "fast_io_fail_timeout_sec": 0, 00:22:31.828 "psk": "key0", 00:22:31.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.828 "hdgst": false, 00:22:31.828 "ddgst": false, 00:22:31.828 "multipath": "multipath" 00:22:31.828 } 00:22:31.828 }, 00:22:31.828 { 00:22:31.828 "method": "bdev_nvme_set_hotplug", 00:22:31.828 "params": { 00:22:31.828 "period_us": 100000, 00:22:31.828 "enable": false 00:22:31.828 } 00:22:31.828 }, 00:22:31.828 { 00:22:31.828 "method": "bdev_enable_histogram", 00:22:31.828 "params": { 00:22:31.828 "name": "nvme0n1", 00:22:31.828 "enable": true 00:22:31.828 } 00:22:31.828 }, 00:22:31.828 { 00:22:31.828 "method": "bdev_wait_for_examine" 00:22:31.828 } 00:22:31.828 ] 00:22:31.828 }, 00:22:31.828 { 00:22:31.828 "subsystem": "nbd", 00:22:31.828 "config": [] 00:22:31.828 } 00:22:31.828 ] 00:22:31.828 }' 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 362021 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 362021 ']' 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 362021 00:22:31.828 09:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362021 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362021' 00:22:31.828 killing process with pid 362021 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 362021 00:22:31.828 Received shutdown signal, test time was about 1.000000 seconds 00:22:31.828 00:22:31.828 Latency(us) 00:22:31.828 [2024-11-19T08:40:18.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.828 [2024-11-19T08:40:18.576Z] =================================================================================================================== 00:22:31.828 [2024-11-19T08:40:18.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 362021 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 361673 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 361673 ']' 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 361673 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:31.828 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.828 09:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361673 00:22:32.090 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.090 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.090 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361673' 00:22:32.090 killing process with pid 361673 00:22:32.090 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 361673 00:22:32.090 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 361673 00:22:32.090 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:32.090 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:32.090 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.090 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:32.090 "subsystems": [ 00:22:32.090 { 00:22:32.090 "subsystem": "keyring", 00:22:32.090 "config": [ 00:22:32.090 { 00:22:32.090 "method": "keyring_file_add_key", 00:22:32.090 "params": { 00:22:32.090 "name": "key0", 00:22:32.090 "path": "/tmp/tmp.OFt8sCkxfC" 00:22:32.090 } 00:22:32.090 } 00:22:32.090 ] 00:22:32.090 }, 00:22:32.090 { 00:22:32.090 "subsystem": "iobuf", 00:22:32.090 "config": [ 00:22:32.090 { 00:22:32.090 "method": "iobuf_set_options", 00:22:32.090 "params": { 00:22:32.090 "small_pool_count": 8192, 00:22:32.090 "large_pool_count": 1024, 00:22:32.090 "small_bufsize": 8192, 00:22:32.090 "large_bufsize": 135168, 00:22:32.090 "enable_numa": false 00:22:32.090 } 00:22:32.090 } 00:22:32.090 ] 00:22:32.090 }, 00:22:32.090 { 00:22:32.090 "subsystem": "sock", 00:22:32.090 "config": [ 00:22:32.090 { 
00:22:32.090 "method": "sock_set_default_impl", 00:22:32.090 "params": { 00:22:32.090 "impl_name": "posix" 00:22:32.090 } 00:22:32.090 }, 00:22:32.090 { 00:22:32.090 "method": "sock_impl_set_options", 00:22:32.090 "params": { 00:22:32.090 "impl_name": "ssl", 00:22:32.090 "recv_buf_size": 4096, 00:22:32.091 "send_buf_size": 4096, 00:22:32.091 "enable_recv_pipe": true, 00:22:32.091 "enable_quickack": false, 00:22:32.091 "enable_placement_id": 0, 00:22:32.091 "enable_zerocopy_send_server": true, 00:22:32.091 "enable_zerocopy_send_client": false, 00:22:32.091 "zerocopy_threshold": 0, 00:22:32.091 "tls_version": 0, 00:22:32.091 "enable_ktls": false 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "sock_impl_set_options", 00:22:32.091 "params": { 00:22:32.091 "impl_name": "posix", 00:22:32.091 "recv_buf_size": 2097152, 00:22:32.091 "send_buf_size": 2097152, 00:22:32.091 "enable_recv_pipe": true, 00:22:32.091 "enable_quickack": false, 00:22:32.091 "enable_placement_id": 0, 00:22:32.091 "enable_zerocopy_send_server": true, 00:22:32.091 "enable_zerocopy_send_client": false, 00:22:32.091 "zerocopy_threshold": 0, 00:22:32.091 "tls_version": 0, 00:22:32.091 "enable_ktls": false 00:22:32.091 } 00:22:32.091 } 00:22:32.091 ] 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "subsystem": "vmd", 00:22:32.091 "config": [] 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "subsystem": "accel", 00:22:32.091 "config": [ 00:22:32.091 { 00:22:32.091 "method": "accel_set_options", 00:22:32.091 "params": { 00:22:32.091 "small_cache_size": 128, 00:22:32.091 "large_cache_size": 16, 00:22:32.091 "task_count": 2048, 00:22:32.091 "sequence_count": 2048, 00:22:32.091 "buf_count": 2048 00:22:32.091 } 00:22:32.091 } 00:22:32.091 ] 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "subsystem": "bdev", 00:22:32.091 "config": [ 00:22:32.091 { 00:22:32.091 "method": "bdev_set_options", 00:22:32.091 "params": { 00:22:32.091 "bdev_io_pool_size": 65535, 00:22:32.091 "bdev_io_cache_size": 256, 
00:22:32.091 "bdev_auto_examine": true, 00:22:32.091 "iobuf_small_cache_size": 128, 00:22:32.091 "iobuf_large_cache_size": 16 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "bdev_raid_set_options", 00:22:32.091 "params": { 00:22:32.091 "process_window_size_kb": 1024, 00:22:32.091 "process_max_bandwidth_mb_sec": 0 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "bdev_iscsi_set_options", 00:22:32.091 "params": { 00:22:32.091 "timeout_sec": 30 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "bdev_nvme_set_options", 00:22:32.091 "params": { 00:22:32.091 "action_on_timeout": "none", 00:22:32.091 "timeout_us": 0, 00:22:32.091 "timeout_admin_us": 0, 00:22:32.091 "keep_alive_timeout_ms": 10000, 00:22:32.091 "arbitration_burst": 0, 00:22:32.091 "low_priority_weight": 0, 00:22:32.091 "medium_priority_weight": 0, 00:22:32.091 "high_priority_weight": 0, 00:22:32.091 "nvme_adminq_poll_period_us": 10000, 00:22:32.091 "nvme_ioq_poll_period_us": 0, 00:22:32.091 "io_queue_requests": 0, 00:22:32.091 "delay_cmd_submit": true, 00:22:32.091 "transport_retry_count": 4, 00:22:32.091 "bdev_retry_count": 3, 00:22:32.091 "transport_ack_timeout": 0, 00:22:32.091 "ctrlr_loss_timeout_sec": 0, 00:22:32.091 "reconnect_delay_sec": 0, 00:22:32.091 "fast_io_fail_timeout_sec": 0, 00:22:32.091 "disable_auto_failback": false, 00:22:32.091 "generate_uuids": false, 00:22:32.091 "transport_tos": 0, 00:22:32.091 "nvme_error_stat": false, 00:22:32.091 "rdma_srq_size": 0, 00:22:32.091 "io_path_stat": false, 00:22:32.091 "allow_accel_sequence": false, 00:22:32.091 "rdma_max_cq_size": 0, 00:22:32.091 "rdma_cm_event_timeout_ms": 0, 00:22:32.091 "dhchap_digests": [ 00:22:32.091 "sha256", 00:22:32.091 "sha384", 00:22:32.091 "sha512" 00:22:32.091 ], 00:22:32.091 "dhchap_dhgroups": [ 00:22:32.091 "null", 00:22:32.091 "ffdhe2048", 00:22:32.091 "ffdhe3072", 00:22:32.091 "ffdhe4096", 00:22:32.091 "ffdhe6144", 00:22:32.091 "ffdhe8192" 00:22:32.091 ] 
00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "bdev_nvme_set_hotplug", 00:22:32.091 "params": { 00:22:32.091 "period_us": 100000, 00:22:32.091 "enable": false 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "bdev_malloc_create", 00:22:32.091 "params": { 00:22:32.091 "name": "malloc0", 00:22:32.091 "num_blocks": 8192, 00:22:32.091 "block_size": 4096, 00:22:32.091 "physical_block_size": 4096, 00:22:32.091 "uuid": "baef1679-afcd-40b3-a7c9-e1fd96a9b9a2", 00:22:32.091 "optimal_io_boundary": 0, 00:22:32.091 "md_size": 0, 00:22:32.091 "dif_type": 0, 00:22:32.091 "dif_is_head_of_md": false, 00:22:32.091 "dif_pi_format": 0 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "bdev_wait_for_examine" 00:22:32.091 } 00:22:32.091 ] 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "subsystem": "nbd", 00:22:32.091 "config": [] 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "subsystem": "scheduler", 00:22:32.091 "config": [ 00:22:32.091 { 00:22:32.091 "method": "framework_set_scheduler", 00:22:32.091 "params": { 00:22:32.091 "name": "static" 00:22:32.091 } 00:22:32.091 } 00:22:32.091 ] 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "subsystem": "nvmf", 00:22:32.091 "config": [ 00:22:32.091 { 00:22:32.091 "method": "nvmf_set_config", 00:22:32.091 "params": { 00:22:32.091 "discovery_filter": "match_any", 00:22:32.091 "admin_cmd_passthru": { 00:22:32.091 "identify_ctrlr": false 00:22:32.091 }, 00:22:32.091 "dhchap_digests": [ 00:22:32.091 "sha256", 00:22:32.091 "sha384", 00:22:32.091 "sha512" 00:22:32.091 ], 00:22:32.091 "dhchap_dhgroups": [ 00:22:32.091 "null", 00:22:32.091 "ffdhe2048", 00:22:32.091 "ffdhe3072", 00:22:32.091 "ffdhe4096", 00:22:32.091 "ffdhe6144", 00:22:32.091 "ffdhe8192" 00:22:32.091 ] 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "nvmf_set_max_subsystems", 00:22:32.091 "params": { 00:22:32.091 "max_subsystems": 1024 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": 
"nvmf_set_crdt", 00:22:32.091 "params": { 00:22:32.091 "crdt1": 0, 00:22:32.091 "crdt2": 0, 00:22:32.091 "crdt3": 0 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "nvmf_create_transport", 00:22:32.091 "params": { 00:22:32.091 "trtype": "TCP", 00:22:32.091 "max_queue_depth": 128, 00:22:32.091 "max_io_qpairs_per_ctrlr": 127, 00:22:32.091 "in_capsule_data_size": 4096, 00:22:32.091 "max_io_size": 131072, 00:22:32.091 "io_unit_size": 131072, 00:22:32.091 "max_aq_depth": 128, 00:22:32.091 "num_shared_buffers": 511, 00:22:32.091 "buf_cache_size": 4294967295, 00:22:32.091 "dif_insert_or_strip": false, 00:22:32.091 "zcopy": false, 00:22:32.091 "c2h_success": false, 00:22:32.091 "sock_priority": 0, 00:22:32.091 "abort_timeout_sec": 1, 00:22:32.091 "ack_timeout": 0, 00:22:32.091 "data_wr_pool_size": 0 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "nvmf_create_subsystem", 00:22:32.091 "params": { 00:22:32.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.091 "allow_any_host": false, 00:22:32.091 "serial_number": "00000000000000000000", 00:22:32.091 "model_number": "SPDK bdev Controller", 00:22:32.091 "max_namespaces": 32, 00:22:32.091 "min_cntlid": 1, 00:22:32.091 "max_cntlid": 65519, 00:22:32.091 "ana_reporting": false 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "nvmf_subsystem_add_host", 00:22:32.091 "params": { 00:22:32.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.091 "host": "nqn.2016-06.io.spdk:host1", 00:22:32.091 "psk": "key0" 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 00:22:32.091 "method": "nvmf_subsystem_add_ns", 00:22:32.091 "params": { 00:22:32.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.091 "namespace": { 00:22:32.091 "nsid": 1, 00:22:32.091 "bdev_name": "malloc0", 00:22:32.091 "nguid": "BAEF1679AFCD40B3A7C9E1FD96A9B9A2", 00:22:32.091 "uuid": "baef1679-afcd-40b3-a7c9-e1fd96a9b9a2", 00:22:32.091 "no_auto_visible": false 00:22:32.091 } 00:22:32.091 } 00:22:32.091 }, 00:22:32.091 { 
00:22:32.091 "method": "nvmf_subsystem_add_listener", 00:22:32.091 "params": { 00:22:32.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.091 "listen_address": { 00:22:32.091 "trtype": "TCP", 00:22:32.091 "adrfam": "IPv4", 00:22:32.091 "traddr": "10.0.0.2", 00:22:32.091 "trsvcid": "4420" 00:22:32.091 }, 00:22:32.091 "secure_channel": false, 00:22:32.091 "sock_impl": "ssl" 00:22:32.091 } 00:22:32.091 } 00:22:32.091 ] 00:22:32.091 } 00:22:32.091 ] 00:22:32.091 }' 00:22:32.091 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.091 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=362574 00:22:32.091 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 362574 00:22:32.092 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:32.092 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 362574 ']' 00:22:32.092 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.092 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.092 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.092 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.092 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.092 [2024-11-19 09:40:18.752568] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:22:32.092 [2024-11-19 09:40:18.752627] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.352 [2024-11-19 09:40:18.842768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.352 [2024-11-19 09:40:18.872449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.352 [2024-11-19 09:40:18.872479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.352 [2024-11-19 09:40:18.872485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.352 [2024-11-19 09:40:18.872489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.352 [2024-11-19 09:40:18.872494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:32.352 [2024-11-19 09:40:18.872982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.352 [2024-11-19 09:40:19.065953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.618 [2024-11-19 09:40:19.097986] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.618 [2024-11-19 09:40:19.098187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=362734 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 362734 /var/tmp/bdevperf.sock 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 362734 ']' 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:32.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.880 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:32.880 "subsystems": [ 00:22:32.880 { 00:22:32.880 "subsystem": "keyring", 00:22:32.880 "config": [ 00:22:32.880 { 00:22:32.880 "method": "keyring_file_add_key", 00:22:32.880 "params": { 00:22:32.880 "name": "key0", 00:22:32.880 "path": "/tmp/tmp.OFt8sCkxfC" 00:22:32.880 } 00:22:32.880 } 00:22:32.880 ] 00:22:32.880 }, 00:22:32.880 { 00:22:32.880 "subsystem": "iobuf", 00:22:32.880 "config": [ 00:22:32.880 { 00:22:32.880 "method": "iobuf_set_options", 00:22:32.880 "params": { 00:22:32.880 "small_pool_count": 8192, 00:22:32.881 "large_pool_count": 1024, 00:22:32.881 "small_bufsize": 8192, 00:22:32.881 "large_bufsize": 135168, 00:22:32.881 "enable_numa": false 00:22:32.881 } 00:22:32.881 } 00:22:32.881 ] 00:22:32.881 }, 00:22:32.881 { 00:22:32.881 "subsystem": "sock", 00:22:32.881 "config": [ 00:22:32.881 { 00:22:32.881 "method": "sock_set_default_impl", 00:22:32.881 "params": { 00:22:32.881 "impl_name": "posix" 00:22:32.881 } 00:22:32.881 }, 00:22:32.881 { 00:22:32.881 "method": "sock_impl_set_options", 00:22:32.881 "params": { 00:22:32.881 "impl_name": "ssl", 00:22:32.881 "recv_buf_size": 4096, 00:22:32.881 "send_buf_size": 4096, 00:22:32.881 "enable_recv_pipe": true, 00:22:32.881 "enable_quickack": false, 00:22:32.881 "enable_placement_id": 0, 00:22:32.881 "enable_zerocopy_send_server": true, 00:22:32.881 
"enable_zerocopy_send_client": false, 00:22:32.881 "zerocopy_threshold": 0, 00:22:32.881 "tls_version": 0, 00:22:32.881 "enable_ktls": false 00:22:32.881 } 00:22:32.881 }, 00:22:32.881 { 00:22:32.881 "method": "sock_impl_set_options", 00:22:32.881 "params": { 00:22:32.881 "impl_name": "posix", 00:22:32.881 "recv_buf_size": 2097152, 00:22:32.881 "send_buf_size": 2097152, 00:22:32.881 "enable_recv_pipe": true, 00:22:32.881 "enable_quickack": false, 00:22:32.881 "enable_placement_id": 0, 00:22:32.881 "enable_zerocopy_send_server": true, 00:22:32.881 "enable_zerocopy_send_client": false, 00:22:32.881 "zerocopy_threshold": 0, 00:22:32.881 "tls_version": 0, 00:22:32.881 "enable_ktls": false 00:22:32.881 } 00:22:32.881 } 00:22:32.881 ] 00:22:32.881 }, 00:22:32.881 { 00:22:32.881 "subsystem": "vmd", 00:22:32.881 "config": [] 00:22:32.881 }, 00:22:32.881 { 00:22:32.881 "subsystem": "accel", 00:22:32.881 "config": [ 00:22:32.881 { 00:22:32.881 "method": "accel_set_options", 00:22:32.881 "params": { 00:22:32.881 "small_cache_size": 128, 00:22:32.881 "large_cache_size": 16, 00:22:32.881 "task_count": 2048, 00:22:32.881 "sequence_count": 2048, 00:22:32.881 "buf_count": 2048 00:22:32.881 } 00:22:32.881 } 00:22:32.881 ] 00:22:32.881 }, 00:22:32.881 { 00:22:32.881 "subsystem": "bdev", 00:22:32.881 "config": [ 00:22:32.881 { 00:22:32.881 "method": "bdev_set_options", 00:22:32.881 "params": { 00:22:32.881 "bdev_io_pool_size": 65535, 00:22:32.881 "bdev_io_cache_size": 256, 00:22:32.881 "bdev_auto_examine": true, 00:22:32.881 "iobuf_small_cache_size": 128, 00:22:32.881 "iobuf_large_cache_size": 16 00:22:32.881 } 00:22:32.881 }, 00:22:32.881 { 00:22:32.881 "method": "bdev_raid_set_options", 00:22:32.881 "params": { 00:22:32.881 "process_window_size_kb": 1024, 00:22:32.881 "process_max_bandwidth_mb_sec": 0 00:22:32.881 } 00:22:32.881 }, 00:22:32.881 { 00:22:32.881 "method": "bdev_iscsi_set_options", 00:22:32.881 "params": { 00:22:32.881 "timeout_sec": 30 00:22:32.881 } 00:22:32.881 }, 
00:22:32.881 { 00:22:32.881 "method": "bdev_nvme_set_options", 00:22:32.881 "params": { 00:22:32.881 "action_on_timeout": "none", 00:22:32.881 "timeout_us": 0, 00:22:32.881 "timeout_admin_us": 0, 00:22:32.881 "keep_alive_timeout_ms": 10000, 00:22:32.881 "arbitration_burst": 0, 00:22:32.881 "low_priority_weight": 0, 00:22:32.881 "medium_priority_weight": 0, 00:22:32.881 "high_priority_weight": 0, 00:22:32.881 "nvme_adminq_poll_period_us": 10000, 00:22:32.881 "nvme_ioq_poll_period_us": 0, 00:22:32.881 "io_queue_requests": 512, 00:22:32.881 "delay_cmd_submit": true, 00:22:32.881 "transport_retry_count": 4, 00:22:32.881 "bdev_retry_count": 3, 00:22:32.881 "transport_ack_timeout": 0, 00:22:32.881 "ctrlr_loss_timeout_sec": 0, 00:22:32.881 "reconnect_delay_sec": 0, 00:22:32.881 "fast_io_fail_timeout_sec": 0, 00:22:32.881 "disable_auto_failback": false, 00:22:32.881 "generate_uuids": false, 00:22:32.881 "transport_tos": 0, 00:22:32.881 "nvme_error_stat": false, 00:22:32.881 "rdma_srq_size": 0, 00:22:32.881 "io_path_stat": false, 00:22:32.881 "allow_accel_sequence": false, 00:22:32.881 "rdma_max_cq_size": 0, 00:22:32.881 "rdma_cm_event_timeout_ms": 0, 00:22:32.881 "dhchap_digests": [ 00:22:32.881 "sha256", 00:22:32.881 "sha384", 00:22:32.881 "sha512" 00:22:32.881 ], 00:22:32.881 "dhchap_dhgroups": [ 00:22:32.881 "null", 00:22:32.881 "ffdhe2048", 00:22:32.881 "ffdhe3072", 00:22:32.881 "ffdhe4096", 00:22:32.881 "ffdhe6144", 00:22:32.881 "ffdhe8192" 00:22:32.881 ] 00:22:32.881 } 00:22:32.881 }, 00:22:32.881 { 00:22:32.881 "method": "bdev_nvme_attach_controller", 00:22:32.881 "params": { 00:22:32.882 "name": "nvme0", 00:22:32.882 "trtype": "TCP", 00:22:32.882 "adrfam": "IPv4", 00:22:32.882 "traddr": "10.0.0.2", 00:22:32.882 "trsvcid": "4420", 00:22:32.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.882 "prchk_reftag": false, 00:22:32.882 "prchk_guard": false, 00:22:32.882 "ctrlr_loss_timeout_sec": 0, 00:22:32.882 "reconnect_delay_sec": 0, 00:22:32.882 
"fast_io_fail_timeout_sec": 0, 00:22:32.882 "psk": "key0", 00:22:32.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.882 "hdgst": false, 00:22:32.882 "ddgst": false, 00:22:32.882 "multipath": "multipath" 00:22:32.882 } 00:22:32.882 }, 00:22:32.882 { 00:22:32.882 "method": "bdev_nvme_set_hotplug", 00:22:32.882 "params": { 00:22:32.882 "period_us": 100000, 00:22:32.882 "enable": false 00:22:32.882 } 00:22:32.882 }, 00:22:32.882 { 00:22:32.882 "method": "bdev_enable_histogram", 00:22:32.882 "params": { 00:22:32.882 "name": "nvme0n1", 00:22:32.882 "enable": true 00:22:32.882 } 00:22:32.882 }, 00:22:32.882 { 00:22:32.882 "method": "bdev_wait_for_examine" 00:22:32.882 } 00:22:32.882 ] 00:22:32.882 }, 00:22:32.882 { 00:22:32.882 "subsystem": "nbd", 00:22:32.882 "config": [] 00:22:32.882 } 00:22:32.882 ] 00:22:32.882 }' 00:22:33.142 [2024-11-19 09:40:19.639285] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:33.142 [2024-11-19 09:40:19.639339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362734 ] 00:22:33.142 [2024-11-19 09:40:19.723083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.142 [2024-11-19 09:40:19.752888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.403 [2024-11-19 09:40:19.887651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:33.973 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.973 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:33.973 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:22:33.973 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:33.973 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.973 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:33.973 Running I/O for 1 seconds... 00:22:35.357 1065.00 IOPS, 4.16 MiB/s 00:22:35.357 Latency(us) 00:22:35.357 [2024-11-19T08:40:22.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.357 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:35.357 Verification LBA range: start 0x0 length 0x2000 00:22:35.357 nvme0n1 : 1.11 1074.56 4.20 0.00 0.00 114510.61 5761.71 198355.63 00:22:35.357 [2024-11-19T08:40:22.106Z] =================================================================================================================== 00:22:35.358 [2024-11-19T08:40:22.106Z] Total : 1074.56 4.20 0.00 0.00 114510.61 5761.71 198355.63 00:22:35.358 { 00:22:35.358 "results": [ 00:22:35.358 { 00:22:35.358 "job": "nvme0n1", 00:22:35.358 "core_mask": "0x2", 00:22:35.358 "workload": "verify", 00:22:35.358 "status": "finished", 00:22:35.358 "verify_range": { 00:22:35.358 "start": 0, 00:22:35.358 "length": 8192 00:22:35.358 }, 00:22:35.358 "queue_depth": 128, 00:22:35.358 "io_size": 4096, 00:22:35.358 "runtime": 1.111154, 00:22:35.358 "iops": 1074.5585220410492, 00:22:35.358 "mibps": 4.197494226722848, 00:22:35.358 "io_failed": 0, 00:22:35.358 "io_timeout": 0, 00:22:35.358 "avg_latency_us": 114510.60958123952, 00:22:35.358 "min_latency_us": 5761.706666666667, 00:22:35.358 "max_latency_us": 198355.62666666668 00:22:35.358 } 00:22:35.358 ], 00:22:35.358 "core_count": 1 00:22:35.358 } 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT 
SIGTERM EXIT 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:35.358 nvmf_trace.0 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 362734 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 362734 ']' 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 362734 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.358 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362734 00:22:35.358 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.358 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.358 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362734' 00:22:35.358 killing process with pid 362734 00:22:35.358 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 362734 00:22:35.358 Received shutdown signal, test time was about 1.000000 seconds 00:22:35.358 00:22:35.358 Latency(us) 00:22:35.358 [2024-11-19T08:40:22.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.358 [2024-11-19T08:40:22.106Z] =================================================================================================================== 00:22:35.358 [2024-11-19T08:40:22.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.358 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 362734 00:22:35.619 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:35.619 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.620 rmmod nvme_tcp 00:22:35.620 rmmod nvme_fabrics 00:22:35.620 rmmod nvme_keyring 00:22:35.620 09:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 362574 ']' 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 362574 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 362574 ']' 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 362574 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362574 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362574' 00:22:35.620 killing process with pid 362574 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 362574 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 362574 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.620 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.167 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.BDpNlVjCsn /tmp/tmp.ufSqlB3t9G /tmp/tmp.OFt8sCkxfC 00:22:38.168 00:22:38.168 real 1m28.251s 00:22:38.168 user 2m21.717s 00:22:38.168 sys 0m25.104s 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.168 ************************************ 00:22:38.168 END TEST nvmf_tls 00:22:38.168 ************************************ 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:38.168 ************************************ 00:22:38.168 START TEST nvmf_fips 00:22:38.168 ************************************ 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:38.168 * Looking for test storage... 00:22:38.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.168 09:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.168 09:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:38.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.168 --rc genhtml_branch_coverage=1 00:22:38.168 --rc genhtml_function_coverage=1 00:22:38.168 --rc genhtml_legend=1 00:22:38.168 --rc geninfo_all_blocks=1 00:22:38.168 --rc geninfo_unexecuted_blocks=1 00:22:38.168 00:22:38.168 ' 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:38.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.168 --rc genhtml_branch_coverage=1 00:22:38.168 --rc genhtml_function_coverage=1 00:22:38.168 --rc genhtml_legend=1 00:22:38.168 --rc geninfo_all_blocks=1 00:22:38.168 --rc geninfo_unexecuted_blocks=1 00:22:38.168 00:22:38.168 ' 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:38.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.168 --rc genhtml_branch_coverage=1 00:22:38.168 --rc genhtml_function_coverage=1 00:22:38.168 --rc genhtml_legend=1 00:22:38.168 --rc geninfo_all_blocks=1 00:22:38.168 --rc geninfo_unexecuted_blocks=1 00:22:38.168 00:22:38.168 ' 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:38.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.168 --rc genhtml_branch_coverage=1 00:22:38.168 --rc genhtml_function_coverage=1 00:22:38.168 --rc genhtml_legend=1 00:22:38.168 --rc geninfo_all_blocks=1 00:22:38.168 --rc geninfo_unexecuted_blocks=1 00:22:38.168 00:22:38.168 ' 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.168 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:38.169 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:38.430 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:38.430 Error setting digest 00:22:38.430 4042C69E177F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:38.430 4042C69E177F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:38.430 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:38.431 09:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.431 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:46.570 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:46.570 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:46.570 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:46.570 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.570 09:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:22:46.570 00:22:46.570 --- 10.0.0.2 ping statistics --- 00:22:46.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.570 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:22:46.570 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:46.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:22:46.570 00:22:46.570 --- 10.0.0.1 ping statistics --- 00:22:46.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.570 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.571 09:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=367448 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 367448 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 367448 ']' 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.571 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:46.571 [2024-11-19 09:40:32.517848] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:22:46.571 [2024-11-19 09:40:32.517922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.571 [2024-11-19 09:40:32.616350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.571 [2024-11-19 09:40:32.667148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.571 [2024-11-19 09:40:32.667209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.571 [2024-11-19 09:40:32.667218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.571 [2024-11-19 09:40:32.667225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.571 [2024-11-19 09:40:32.667232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:46.571 [2024-11-19 09:40:32.667930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Qd2 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Qd2 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Qd2 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Qd2 00:22:46.832 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:46.832 [2024-11-19 09:40:33.542628] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.832 [2024-11-19 09:40:33.558621] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.832 [2024-11-19 09:40:33.558900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.093 malloc0 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=367792 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 367792 /var/tmp/bdevperf.sock 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 367792 ']' 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.093 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:47.093 [2024-11-19 09:40:33.703640] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:22:47.093 [2024-11-19 09:40:33.703723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367792 ] 00:22:47.093 [2024-11-19 09:40:33.797546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.354 [2024-11-19 09:40:33.848410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.926 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.926 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:47.926 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Qd2 00:22:48.188 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:48.188 [2024-11-19 09:40:34.889727] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.449 TLSTESTn1 00:22:48.449 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:48.449 Running I/O for 10 seconds... 
00:22:50.780 5284.00 IOPS, 20.64 MiB/s [2024-11-19T08:40:38.100Z] 5601.50 IOPS, 21.88 MiB/s [2024-11-19T08:40:39.486Z] 5039.00 IOPS, 19.68 MiB/s [2024-11-19T08:40:40.427Z] 4842.50 IOPS, 18.92 MiB/s [2024-11-19T08:40:41.368Z] 4864.20 IOPS, 19.00 MiB/s [2024-11-19T08:40:42.309Z] 4445.00 IOPS, 17.36 MiB/s [2024-11-19T08:40:43.250Z] 4396.86 IOPS, 17.18 MiB/s [2024-11-19T08:40:44.192Z] 4348.38 IOPS, 16.99 MiB/s [2024-11-19T08:40:45.135Z] 4559.78 IOPS, 17.81 MiB/s [2024-11-19T08:40:45.395Z] 4453.30 IOPS, 17.40 MiB/s 00:22:58.647 Latency(us) 00:22:58.647 [2024-11-19T08:40:45.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.647 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:58.647 Verification LBA range: start 0x0 length 0x2000 00:22:58.647 TLSTESTn1 : 10.09 4428.06 17.30 0.00 0.00 28797.03 5952.85 84759.89 00:22:58.647 [2024-11-19T08:40:45.395Z] =================================================================================================================== 00:22:58.647 [2024-11-19T08:40:45.395Z] Total : 4428.06 17.30 0.00 0.00 28797.03 5952.85 84759.89 00:22:58.647 { 00:22:58.647 "results": [ 00:22:58.647 { 00:22:58.647 "job": "TLSTESTn1", 00:22:58.647 "core_mask": "0x4", 00:22:58.647 "workload": "verify", 00:22:58.647 "status": "finished", 00:22:58.647 "verify_range": { 00:22:58.647 "start": 0, 00:22:58.647 "length": 8192 00:22:58.647 }, 00:22:58.647 "queue_depth": 128, 00:22:58.647 "io_size": 4096, 00:22:58.647 "runtime": 10.085686, 00:22:58.647 "iops": 4428.057744411238, 00:22:58.647 "mibps": 17.297100564106398, 00:22:58.647 "io_failed": 0, 00:22:58.647 "io_timeout": 0, 00:22:58.647 "avg_latency_us": 28797.025829526792, 00:22:58.647 "min_latency_us": 5952.8533333333335, 00:22:58.647 "max_latency_us": 84759.89333333333 00:22:58.647 } 00:22:58.647 ], 00:22:58.647 "core_count": 1 00:22:58.647 } 00:22:58.647 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:58.647 
09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:58.647 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:58.647 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:58.648 nvmf_trace.0 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 367792 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 367792 ']' 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 367792 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367792 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 367792' 00:22:58.648 killing process with pid 367792 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 367792 00:22:58.648 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.648 00:22:58.648 Latency(us) 00:22:58.648 [2024-11-19T08:40:45.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.648 [2024-11-19T08:40:45.396Z] =================================================================================================================== 00:22:58.648 [2024-11-19T08:40:45.396Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.648 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 367792 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:58.907 rmmod nvme_tcp 00:22:58.907 rmmod nvme_fabrics 00:22:58.907 rmmod nvme_keyring 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:58.907 09:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 367448 ']' 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 367448 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 367448 ']' 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 367448 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367448 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 367448' 00:22:58.907 killing process with pid 367448 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 367448 00:22:58.907 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 367448 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.168 09:40:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.107 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:01.107 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Qd2 00:23:01.107 00:23:01.107 real 0m23.293s 00:23:01.107 user 0m25.494s 00:23:01.107 sys 0m9.283s 00:23:01.107 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:01.107 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:01.107 ************************************ 00:23:01.107 END TEST nvmf_fips 00:23:01.107 ************************************ 00:23:01.107 09:40:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:01.107 09:40:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:01.107 09:40:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:23:01.107 09:40:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:01.369 ************************************ 00:23:01.369 START TEST nvmf_control_msg_list 00:23:01.369 ************************************ 00:23:01.369 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:01.369 * Looking for test storage... 00:23:01.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:01.369 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:01.369 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:23:01.369 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:01.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.369 --rc genhtml_branch_coverage=1 00:23:01.369 --rc genhtml_function_coverage=1 00:23:01.369 --rc genhtml_legend=1 00:23:01.369 --rc geninfo_all_blocks=1 00:23:01.369 --rc geninfo_unexecuted_blocks=1 00:23:01.369 00:23:01.369 ' 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:01.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.369 --rc genhtml_branch_coverage=1 00:23:01.369 --rc genhtml_function_coverage=1 00:23:01.369 --rc genhtml_legend=1 00:23:01.369 --rc geninfo_all_blocks=1 00:23:01.369 --rc geninfo_unexecuted_blocks=1 00:23:01.369 00:23:01.369 ' 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:01.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.369 --rc genhtml_branch_coverage=1 00:23:01.369 --rc genhtml_function_coverage=1 00:23:01.369 --rc genhtml_legend=1 00:23:01.369 --rc geninfo_all_blocks=1 00:23:01.369 --rc geninfo_unexecuted_blocks=1 00:23:01.369 00:23:01.369 ' 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:01.369 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.369 --rc genhtml_branch_coverage=1 00:23:01.369 --rc genhtml_function_coverage=1 00:23:01.369 --rc genhtml_legend=1 00:23:01.369 --rc geninfo_all_blocks=1 00:23:01.369 --rc geninfo_unexecuted_blocks=1 00:23:01.369 00:23:01.369 ' 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.369 09:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.369 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.632 09:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.632 09:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.632 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.777 09:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.777 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.777 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.777 09:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:09.777 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.777 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.778 09:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.778 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.778 09:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:23:09.778 00:23:09.778 --- 10.0.0.2 ping statistics --- 00:23:09.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.778 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:23:09.778 00:23:09.778 --- 10.0.0.1 ping statistics --- 00:23:09.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.778 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=374268 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 374268 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 374268 ']' 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
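The network plumbing that `nvmf_tcp_init` traced above (flush both ports, move the target port into its own network namespace, address the two ends as 10.0.0.2/10.0.0.1, bring the links up, and open TCP/4420 through iptables) can be summarized as a small command generator. This is an illustrative sketch reconstructed from the logged commands, not SPDK's actual implementation; the helper name `build_netns_setup` is invented, and the `cvl_0_0`/`cvl_0_1` interface names are the ones this particular machine's ice driver happened to expose.

```python
# Sketch of the setup steps recorded by nvmf_tcp_init in the log above.
# NOTE: build_netns_setup is a hypothetical helper for illustration only;
# running these commands for real requires root and matching hardware.

def build_netns_setup(target_if="cvl_0_0", initiator_if="cvl_0_1",
                      ns="cvl_0_0_ns_spdk",
                      target_ip="10.0.0.2", initiator_ip="10.0.0.1",
                      port=4420):
    """Return the shell commands, in log order, that reproduce the
    namespace-based target/initiator topology seen in this test run."""
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        # The target's port lives inside the namespace; the initiator's
        # port stays in the default namespace, so traffic crosses a real NIC.
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        # Allow the NVMe/TCP listener port in on the initiator-side interface.
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport {port} -j ACCEPT",
    ]

if __name__ == "__main__":
    for cmd in build_netns_setup():
        print(cmd)
```

The bidirectional pings that follow in the log (host → 10.0.0.2, namespace → 10.0.0.1) then verify this topology before `nvmf_tgt` is started inside the namespace via `ip netns exec`.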
00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.778 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.778 [2024-11-19 09:40:55.687104] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:09.778 [2024-11-19 09:40:55.687180] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.778 [2024-11-19 09:40:55.788002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.778 [2024-11-19 09:40:55.839537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.778 [2024-11-19 09:40:55.839589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.778 [2024-11-19 09:40:55.839598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.778 [2024-11-19 09:40:55.839605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.778 [2024-11-19 09:40:55.839612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.778 [2024-11-19 09:40:55.840370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.778 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.778 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:09.778 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.778 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.778 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:10.040 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.040 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:10.040 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:10.040 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:10.040 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:10.041 [2024-11-19 09:40:56.554533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:10.041 Malloc0 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:10.041 [2024-11-19 09:40:56.608977] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=374498 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=374499 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=374500 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 374498 00:23:10.041 09:40:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:10.041 [2024-11-19 09:40:56.709680] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:10.041 [2024-11-19 09:40:56.719794] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:10.041 [2024-11-19 09:40:56.720217] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:11.427 Initializing NVMe Controllers 00:23:11.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:11.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:11.428 Initialization complete. Launching workers. 00:23:11.428 ======================================================== 00:23:11.428 Latency(us) 00:23:11.428 Device Information : IOPS MiB/s Average min max 00:23:11.428 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40928.32 40804.66 41512.87 00:23:11.428 ======================================================== 00:23:11.428 Total : 25.00 0.10 40928.32 40804.66 41512.87 00:23:11.428 00:23:11.428 Initializing NVMe Controllers 00:23:11.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:11.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:11.428 Initialization complete. Launching workers. 
00:23:11.428 ======================================================== 00:23:11.428 Latency(us) 00:23:11.428 Device Information : IOPS MiB/s Average min max 00:23:11.428 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1475.00 5.76 678.13 415.31 853.88 00:23:11.428 ======================================================== 00:23:11.428 Total : 1475.00 5.76 678.13 415.31 853.88 00:23:11.428 00:23:11.428 Initializing NVMe Controllers 00:23:11.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:11.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:11.428 Initialization complete. Launching workers. 00:23:11.428 ======================================================== 00:23:11.428 Latency(us) 00:23:11.428 Device Information : IOPS MiB/s Average min max 00:23:11.428 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1568.00 6.12 637.77 155.91 840.17 00:23:11.428 ======================================================== 00:23:11.428 Total : 1568.00 6.12 637.77 155.91 840.17 00:23:11.428 00:23:11.428 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 374499 00:23:11.428 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 374500 00:23:11.428 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:11.428 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:11.428 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:11.428 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:11.428 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.428 09:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:11.428 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.428 09:40:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.428 rmmod nvme_tcp 00:23:11.428 rmmod nvme_fabrics 00:23:11.428 rmmod nvme_keyring 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 374268 ']' 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 374268 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 374268 ']' 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 374268 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 374268 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 374268' 00:23:11.428 killing process with pid 374268 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 374268 00:23:11.428 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 374268 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.689 09:40:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.616 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:13.616 00:23:13.616 real 0m12.461s 00:23:13.616 user 0m8.247s 00:23:13.616 
sys 0m6.526s 00:23:13.616 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.616 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:13.616 ************************************ 00:23:13.616 END TEST nvmf_control_msg_list 00:23:13.616 ************************************ 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:13.876 ************************************ 00:23:13.876 START TEST nvmf_wait_for_buf 00:23:13.876 ************************************ 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:13.876 * Looking for test storage... 
00:23:13.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.876 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:23:14.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.138 --rc genhtml_branch_coverage=1 00:23:14.138 --rc genhtml_function_coverage=1 00:23:14.138 --rc genhtml_legend=1 00:23:14.138 --rc geninfo_all_blocks=1 00:23:14.138 --rc geninfo_unexecuted_blocks=1 00:23:14.138 00:23:14.138 ' 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:14.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.138 --rc genhtml_branch_coverage=1 00:23:14.138 --rc genhtml_function_coverage=1 00:23:14.138 --rc genhtml_legend=1 00:23:14.138 --rc geninfo_all_blocks=1 00:23:14.138 --rc geninfo_unexecuted_blocks=1 00:23:14.138 00:23:14.138 ' 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:14.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.138 --rc genhtml_branch_coverage=1 00:23:14.138 --rc genhtml_function_coverage=1 00:23:14.138 --rc genhtml_legend=1 00:23:14.138 --rc geninfo_all_blocks=1 00:23:14.138 --rc geninfo_unexecuted_blocks=1 00:23:14.138 00:23:14.138 ' 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:14.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.138 --rc genhtml_branch_coverage=1 00:23:14.138 --rc genhtml_function_coverage=1 00:23:14.138 --rc genhtml_legend=1 00:23:14.138 --rc geninfo_all_blocks=1 00:23:14.138 --rc geninfo_unexecuted_blocks=1 00:23:14.138 00:23:14.138 ' 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.138 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.139 09:41:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.282 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.282 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:22.282 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:22.282 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:22.282 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:22.282 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:22.282 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:22.283 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:22.283 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:22.283 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.283 09:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:22.283 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:22.283 09:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:22.283 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.283 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.283 09:41:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.283 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:22.283 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:22.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:23:22.283 00:23:22.283 --- 10.0.0.2 ping statistics --- 00:23:22.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.283 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:23:22.283 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:22.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:23:22.283 00:23:22.283 --- 10.0.0.1 ping statistics --- 00:23:22.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.283 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:22.283 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.283 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:23:22.283 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=378939 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 378939 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 378939 ']' 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.284 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.284 [2024-11-19 09:41:08.237011] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:22.284 [2024-11-19 09:41:08.237085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.284 [2024-11-19 09:41:08.337340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.284 [2024-11-19 09:41:08.387973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.284 [2024-11-19 09:41:08.388027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:22.284 [2024-11-19 09:41:08.388041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.284 [2024-11-19 09:41:08.388048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.284 [2024-11-19 09:41:08.388053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.284 [2024-11-19 09:41:08.388802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.547 
09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.547 Malloc0 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.547 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.548 [2024-11-19 09:41:09.212832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:22.548 [2024-11-19 09:41:09.249187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:22.548 09:41:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:22.811 [2024-11-19 09:41:09.354276] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:24.197 Initializing NVMe Controllers 00:23:24.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:24.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:24.197 Initialization complete. Launching workers. 00:23:24.197 ======================================================== 00:23:24.197 Latency(us) 00:23:24.197 Device Information : IOPS MiB/s Average min max 00:23:24.197 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.54 15.94 32486.16 8010.40 63979.11 00:23:24.197 ======================================================== 00:23:24.197 Total : 127.54 15.94 32486.16 8010.40 63979.11 00:23:24.197 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.197 09:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.197 rmmod nvme_tcp 00:23:24.197 rmmod nvme_fabrics 00:23:24.197 rmmod nvme_keyring 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 378939 ']' 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 378939 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 378939 ']' 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 378939 
00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.197 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378939 00:23:24.458 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.458 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.458 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378939' 00:23:24.458 killing process with pid 378939 00:23:24.458 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 378939 00:23:24.458 09:41:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 378939 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.458 09:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.458 09:41:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.003 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:27.003 00:23:27.003 real 0m12.744s 00:23:27.003 user 0m5.162s 00:23:27.003 sys 0m6.173s 00:23:27.003 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.003 09:41:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:27.003 ************************************ 00:23:27.003 END TEST nvmf_wait_for_buf 00:23:27.003 ************************************ 00:23:27.003 09:41:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:23:27.003 09:41:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:23:27.003 09:41:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:23:27.003 09:41:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:23:27.003 09:41:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:23:27.003 09:41:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.592 
09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:33.592 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.592 09:41:20 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:33.592 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:33.592 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:33.592 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.592 09:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:33.854 ************************************ 00:23:33.854 START TEST nvmf_perf_adq 00:23:33.854 ************************************ 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:33.854 * Looking for test storage... 00:23:33.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:33.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.854 --rc genhtml_branch_coverage=1 00:23:33.854 --rc genhtml_function_coverage=1 00:23:33.854 --rc genhtml_legend=1 00:23:33.854 --rc geninfo_all_blocks=1 00:23:33.854 --rc geninfo_unexecuted_blocks=1 00:23:33.854 00:23:33.854 ' 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:33.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.854 --rc genhtml_branch_coverage=1 00:23:33.854 --rc genhtml_function_coverage=1 00:23:33.854 --rc genhtml_legend=1 00:23:33.854 --rc geninfo_all_blocks=1 00:23:33.854 --rc geninfo_unexecuted_blocks=1 00:23:33.854 00:23:33.854 ' 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:33.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.854 --rc genhtml_branch_coverage=1 00:23:33.854 --rc genhtml_function_coverage=1 00:23:33.854 --rc genhtml_legend=1 00:23:33.854 --rc geninfo_all_blocks=1 00:23:33.854 --rc geninfo_unexecuted_blocks=1 00:23:33.854 00:23:33.854 ' 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:33.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.854 --rc genhtml_branch_coverage=1 00:23:33.854 --rc genhtml_function_coverage=1 00:23:33.854 --rc genhtml_legend=1 00:23:33.854 --rc geninfo_all_blocks=1 00:23:33.854 --rc geninfo_unexecuted_blocks=1 00:23:33.854 00:23:33.854 ' 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.854 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.855 09:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.855 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.995 09:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:41.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:41.995 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:41.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:41.995 09:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:41.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:23:41.995 09:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:42.566 09:41:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:46.777 09:41:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
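The `adq_reload_driver` steps traced above cycle the E810 `ice` driver before the ADQ run (`modprobe -a sch_mqprio; rmmod ice; modprobe ice; sleep 5`). A dry-run sketch of that sequence, with commands echoed rather than executed since the real steps need root and the `ice` hardware; module names are taken from the trace:

```shell
#!/bin/sh
# Dry-run sketch of the adq_reload_driver steps seen in perf_adq.sh.
# run() echoes instead of executing so this is safe anywhere.
run() { echo "+ $*"; }

run modprobe -a sch_mqprio    # mqprio qdisc used by ADQ traffic classes
run rmmod ice                 # unload the E810 driver
run modprobe ice              # reload it fresh
run sleep 5                   # give the NIC time to re-register net devices
```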
nvmf/common.sh@315 -- # pci_devs=() 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:50.984 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:50.984 09:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:50.984 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.984 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:50.985 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:50.985 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.985 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.246 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.246 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.246 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.246 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.246 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.246 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.246 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.507 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.507 09:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
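The `nvmf_tcp_init` commands at this point build the test topology: a network namespace for the target side, one NIC port moved into it, 10.0.0.1/24 on the initiator port and 10.0.0.2/24 inside the namespace, and an iptables ACCEPT for the NVMe/TCP port 4420. A dry-run sketch of that sequence (interface and namespace names come from the trace; commands are echoed rather than executed since the real steps need root and the physical ports):

```shell
#!/bin/sh
# Dry-run of the nvmf_tcp_init topology from nvmf/common.sh.
# run() echoes instead of executing; names match this trace.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, moved into the namespace
INI_IF=cvl_0_1   # initiator-side port, stays in the root namespace
run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings that follow in the log verify exactly this wiring before the target starts.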
00:23:51.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:23:51.507 00:23:51.507 --- 10.0.0.2 ping statistics --- 00:23:51.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.507 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:51.507 00:23:51.507 --- 10.0.0.1 ping statistics --- 00:23:51.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.507 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=389404 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 389404 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 389404 ']' 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.507 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:51.507 [2024-11-19 09:41:38.130981] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:51.507 [2024-11-19 09:41:38.131049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.507 [2024-11-19 09:41:38.229593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.769 [2024-11-19 09:41:38.283217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.769 [2024-11-19 09:41:38.283283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.769 [2024-11-19 09:41:38.283291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.769 [2024-11-19 09:41:38.283299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.769 [2024-11-19 09:41:38.283305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.769 [2024-11-19 09:41:38.285665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.769 [2024-11-19 09:41:38.285826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.769 [2024-11-19 09:41:38.285987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.769 [2024-11-19 09:41:38.285988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.341 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.342 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:52.342 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.342 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.342 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.342 09:41:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:52.342 09:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.342 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.603 [2024-11-19 09:41:39.157953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.603 Malloc1 00:23:52.603 09:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.603 [2024-11-19 09:41:39.232191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=389757 00:23:52.603 09:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:52.603 09:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:54.516 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:54.516 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.516 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:54.776 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.776 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:54.776 "tick_rate": 2400000000, 00:23:54.776 "poll_groups": [ 00:23:54.776 { 00:23:54.776 "name": "nvmf_tgt_poll_group_000", 00:23:54.776 "admin_qpairs": 1, 00:23:54.776 "io_qpairs": 1, 00:23:54.776 "current_admin_qpairs": 1, 00:23:54.776 "current_io_qpairs": 1, 00:23:54.776 "pending_bdev_io": 0, 00:23:54.776 "completed_nvme_io": 17177, 00:23:54.776 "transports": [ 00:23:54.776 { 00:23:54.776 "trtype": "TCP" 00:23:54.776 } 00:23:54.776 ] 00:23:54.776 }, 00:23:54.776 { 00:23:54.776 "name": "nvmf_tgt_poll_group_001", 00:23:54.776 "admin_qpairs": 0, 00:23:54.776 "io_qpairs": 1, 00:23:54.776 "current_admin_qpairs": 0, 00:23:54.776 "current_io_qpairs": 1, 00:23:54.776 "pending_bdev_io": 0, 00:23:54.776 "completed_nvme_io": 19018, 00:23:54.776 "transports": [ 00:23:54.776 { 00:23:54.776 "trtype": "TCP" 00:23:54.776 } 00:23:54.776 ] 00:23:54.776 }, 00:23:54.776 { 00:23:54.776 "name": "nvmf_tgt_poll_group_002", 00:23:54.776 "admin_qpairs": 0, 00:23:54.776 "io_qpairs": 1, 00:23:54.776 "current_admin_qpairs": 0, 00:23:54.776 "current_io_qpairs": 1, 00:23:54.776 "pending_bdev_io": 0, 00:23:54.776 "completed_nvme_io": 19247, 00:23:54.776 
"transports": [ 00:23:54.776 { 00:23:54.776 "trtype": "TCP" 00:23:54.776 } 00:23:54.776 ] 00:23:54.776 }, 00:23:54.776 { 00:23:54.776 "name": "nvmf_tgt_poll_group_003", 00:23:54.776 "admin_qpairs": 0, 00:23:54.776 "io_qpairs": 1, 00:23:54.776 "current_admin_qpairs": 0, 00:23:54.776 "current_io_qpairs": 1, 00:23:54.776 "pending_bdev_io": 0, 00:23:54.776 "completed_nvme_io": 16834, 00:23:54.776 "transports": [ 00:23:54.776 { 00:23:54.776 "trtype": "TCP" 00:23:54.776 } 00:23:54.776 ] 00:23:54.776 } 00:23:54.776 ] 00:23:54.776 }' 00:23:54.776 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:54.776 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:54.776 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:54.776 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:54.776 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 389757 00:24:02.915 Initializing NVMe Controllers 00:24:02.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:02.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:02.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:02.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:02.915 Initialization complete. Launching workers. 
00:24:02.915 ======================================================== 00:24:02.915 Latency(us) 00:24:02.915 Device Information : IOPS MiB/s Average min max 00:24:02.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12613.40 49.27 5074.95 1342.29 12852.39 00:24:02.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13427.50 52.45 4766.62 1234.38 13739.42 00:24:02.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14078.40 54.99 4546.50 1426.65 13553.19 00:24:02.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13194.70 51.54 4850.95 1261.55 13013.59 00:24:02.915 ======================================================== 00:24:02.915 Total : 53313.99 208.26 4802.31 1234.38 13739.42 00:24:02.915 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:02.915 rmmod nvme_tcp 00:24:02.915 rmmod nvme_fabrics 00:24:02.915 rmmod nvme_keyring 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:02.915 09:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 389404 ']' 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 389404 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 389404 ']' 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 389404 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 389404 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 389404' 00:24:02.915 killing process with pid 389404 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 389404 00:24:02.915 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 389404 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:03.176 09:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.176 09:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.089 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:05.089 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:05.089 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:05.089 09:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:07.001 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:08.944 09:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.242 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.243 09:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:14.243 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:14.243 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:14.243 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:14.243 09:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:14.243 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:24:14.243 00:24:14.243 --- 10.0.0.2 ping statistics --- 00:24:14.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.243 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:24:14.243 00:24:14.243 --- 10.0.0.1 ping statistics --- 00:24:14.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.243 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.243 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:14.244 net.core.busy_poll = 1 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:14.244 net.core.busy_read = 1 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=394280 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 394280 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 394280 ']' 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.504 [2024-11-19 09:42:01.034323] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:14.504 [2024-11-19 09:42:01.034372] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.504 [2024-11-19 09:42:01.129528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.504 [2024-11-19 09:42:01.169221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.504 [2024-11-19 09:42:01.169266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.504 [2024-11-19 09:42:01.169275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.504 [2024-11-19 09:42:01.169283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:14.504 [2024-11-19 09:42:01.169289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.504 [2024-11-19 09:42:01.170975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.504 [2024-11-19 09:42:01.171126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.504 [2024-11-19 09:42:01.171279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.505 [2024-11-19 09:42:01.171410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.446 [2024-11-19 09:42:01.977082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:15.446 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.446 09:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.446 Malloc1 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.447 [2024-11-19 09:42:02.048518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=394684 
00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:15.447 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:17.358 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:17.358 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.358 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:17.358 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.358 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:17.358 "tick_rate": 2400000000, 00:24:17.358 "poll_groups": [ 00:24:17.358 { 00:24:17.358 "name": "nvmf_tgt_poll_group_000", 00:24:17.358 "admin_qpairs": 1, 00:24:17.358 "io_qpairs": 4, 00:24:17.358 "current_admin_qpairs": 1, 00:24:17.358 "current_io_qpairs": 4, 00:24:17.358 "pending_bdev_io": 0, 00:24:17.358 "completed_nvme_io": 35289, 00:24:17.358 "transports": [ 00:24:17.358 { 00:24:17.358 "trtype": "TCP" 00:24:17.358 } 00:24:17.358 ] 00:24:17.358 }, 00:24:17.358 { 00:24:17.358 "name": "nvmf_tgt_poll_group_001", 00:24:17.358 "admin_qpairs": 0, 00:24:17.358 "io_qpairs": 0, 00:24:17.358 "current_admin_qpairs": 0, 00:24:17.358 "current_io_qpairs": 0, 00:24:17.358 "pending_bdev_io": 0, 00:24:17.358 "completed_nvme_io": 0, 00:24:17.358 "transports": [ 00:24:17.358 { 00:24:17.358 "trtype": "TCP" 00:24:17.358 } 00:24:17.358 ] 00:24:17.358 }, 00:24:17.358 { 00:24:17.358 "name": "nvmf_tgt_poll_group_002", 00:24:17.358 "admin_qpairs": 0, 00:24:17.358 "io_qpairs": 0, 00:24:17.358 "current_admin_qpairs": 0, 00:24:17.358 
"current_io_qpairs": 0, 00:24:17.358 "pending_bdev_io": 0, 00:24:17.358 "completed_nvme_io": 0, 00:24:17.358 "transports": [ 00:24:17.358 { 00:24:17.358 "trtype": "TCP" 00:24:17.358 } 00:24:17.358 ] 00:24:17.358 }, 00:24:17.358 { 00:24:17.358 "name": "nvmf_tgt_poll_group_003", 00:24:17.358 "admin_qpairs": 0, 00:24:17.358 "io_qpairs": 0, 00:24:17.358 "current_admin_qpairs": 0, 00:24:17.358 "current_io_qpairs": 0, 00:24:17.358 "pending_bdev_io": 0, 00:24:17.358 "completed_nvme_io": 0, 00:24:17.358 "transports": [ 00:24:17.358 { 00:24:17.358 "trtype": "TCP" 00:24:17.358 } 00:24:17.358 ] 00:24:17.358 } 00:24:17.358 ] 00:24:17.358 }' 00:24:17.358 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:17.358 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:17.618 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:24:17.618 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:24:17.618 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 394684 00:24:25.757 Initializing NVMe Controllers 00:24:25.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:25.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:25.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:25.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:25.757 Initialization complete. Launching workers. 
00:24:25.757 ======================================================== 00:24:25.757 Latency(us) 00:24:25.757 Device Information : IOPS MiB/s Average min max 00:24:25.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6873.00 26.85 9313.49 1319.84 61081.03 00:24:25.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5525.60 21.58 11586.46 1403.44 58875.11 00:24:25.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5531.20 21.61 11573.99 1397.48 59106.07 00:24:25.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7007.70 27.37 9150.03 1006.86 57834.06 00:24:25.757 ======================================================== 00:24:25.757 Total : 24937.50 97.41 10272.58 1006.86 61081.03 00:24:25.757 00:24:25.757 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:24:25.757 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.757 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:25.757 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.757 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:25.757 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.758 rmmod nvme_tcp 00:24:25.758 rmmod nvme_fabrics 00:24:25.758 rmmod nvme_keyring 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:25.758 09:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 394280 ']' 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 394280 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 394280 ']' 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 394280 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394280 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394280' 00:24:25.758 killing process with pid 394280 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 394280 00:24:25.758 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 394280 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:26.019 09:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.019 09:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.934 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.934 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:27.934 00:24:27.934 real 0m54.251s 00:24:27.934 user 2m50.345s 00:24:27.934 sys 0m12.148s 00:24:27.934 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.934 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.934 ************************************ 00:24:27.934 END TEST nvmf_perf_adq 00:24:27.934 ************************************ 00:24:27.934 09:42:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:27.934 09:42:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:27.934 09:42:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.934 09:42:14 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:27.934 ************************************ 00:24:27.934 START TEST nvmf_shutdown 00:24:27.934 ************************************ 00:24:27.934 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:28.196 * Looking for test storage... 00:24:28.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.196 09:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.196 --rc genhtml_branch_coverage=1 00:24:28.196 --rc genhtml_function_coverage=1 00:24:28.196 --rc genhtml_legend=1 00:24:28.196 --rc geninfo_all_blocks=1 00:24:28.196 --rc geninfo_unexecuted_blocks=1 00:24:28.196 00:24:28.196 ' 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.196 --rc genhtml_branch_coverage=1 00:24:28.196 --rc genhtml_function_coverage=1 00:24:28.196 --rc genhtml_legend=1 00:24:28.196 --rc geninfo_all_blocks=1 00:24:28.196 --rc geninfo_unexecuted_blocks=1 00:24:28.196 00:24:28.196 ' 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.196 --rc genhtml_branch_coverage=1 00:24:28.196 --rc genhtml_function_coverage=1 00:24:28.196 --rc genhtml_legend=1 00:24:28.196 --rc geninfo_all_blocks=1 00:24:28.196 --rc geninfo_unexecuted_blocks=1 00:24:28.196 00:24:28.196 ' 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.196 --rc genhtml_branch_coverage=1 00:24:28.196 --rc genhtml_function_coverage=1 00:24:28.196 --rc genhtml_legend=1 00:24:28.196 --rc geninfo_all_blocks=1 00:24:28.196 --rc geninfo_unexecuted_blocks=1 00:24:28.196 00:24:28.196 ' 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.196 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.197 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:28.458 ************************************ 00:24:28.458 START TEST nvmf_shutdown_tc1 00:24:28.458 ************************************ 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.458 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.459 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.459 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:36.602 09:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.602 09:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:36.602 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.602 09:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:36.602 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:36.602 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:36.602 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:36.602 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:36.603 09:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:36.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:24:36.603 00:24:36.603 --- 10.0.0.2 ping statistics --- 00:24:36.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.603 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:24:36.603 00:24:36.603 --- 10.0.0.1 ping statistics --- 00:24:36.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.603 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=401291 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 401291 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 401291 ']' 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:36.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.603 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:36.603 [2024-11-19 09:42:22.542628] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:36.603 [2024-11-19 09:42:22.542695] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.603 [2024-11-19 09:42:22.645648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:36.603 [2024-11-19 09:42:22.698342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.603 [2024-11-19 09:42:22.698393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.603 [2024-11-19 09:42:22.698402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.603 [2024-11-19 09:42:22.698409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.603 [2024-11-19 09:42:22.698421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:36.603 [2024-11-19 09:42:22.700409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.603 [2024-11-19 09:42:22.700572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.603 [2024-11-19 09:42:22.700982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:36.603 [2024-11-19 09:42:22.700985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.864 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.864 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:36.864 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.864 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.864 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:36.864 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:36.865 [2024-11-19 09:42:23.425334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.865 09:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.865 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:36.865 Malloc1 00:24:36.865 [2024-11-19 09:42:23.549751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.865 Malloc2 00:24:37.126 Malloc3 00:24:37.126 Malloc4 00:24:37.126 Malloc5 00:24:37.126 Malloc6 00:24:37.126 Malloc7 00:24:37.126 Malloc8 00:24:37.388 Malloc9 
00:24:37.388 Malloc10 00:24:37.388 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.388 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:37.388 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:37.388 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=401673 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 401673 /var/tmp/bdevperf.sock 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 401673 ']' 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:37.388 { 00:24:37.388 "params": { 00:24:37.388 "name": "Nvme$subsystem", 00:24:37.388 "trtype": "$TEST_TRANSPORT", 00:24:37.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.388 "adrfam": "ipv4", 00:24:37.388 "trsvcid": "$NVMF_PORT", 00:24:37.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.388 "hdgst": ${hdgst:-false}, 00:24:37.388 "ddgst": ${ddgst:-false} 00:24:37.388 }, 00:24:37.388 "method": "bdev_nvme_attach_controller" 00:24:37.388 } 00:24:37.388 EOF 00:24:37.388 )") 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.388 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:37.388 09:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:37.388 { 00:24:37.388 "params": { 00:24:37.388 "name": "Nvme$subsystem", 00:24:37.388 "trtype": "$TEST_TRANSPORT", 00:24:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "$NVMF_PORT", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.389 "hdgst": ${hdgst:-false}, 00:24:37.389 "ddgst": ${ddgst:-false} 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 } 00:24:37.389 EOF 00:24:37.389 )") 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:37.389 { 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme$subsystem", 00:24:37.389 "trtype": "$TEST_TRANSPORT", 00:24:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "$NVMF_PORT", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.389 "hdgst": ${hdgst:-false}, 00:24:37.389 "ddgst": ${ddgst:-false} 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 } 00:24:37.389 EOF 00:24:37.389 )") 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:37.389 { 
00:24:37.389 "params": { 00:24:37.389 "name": "Nvme$subsystem", 00:24:37.389 "trtype": "$TEST_TRANSPORT", 00:24:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "$NVMF_PORT", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.389 "hdgst": ${hdgst:-false}, 00:24:37.389 "ddgst": ${ddgst:-false} 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 } 00:24:37.389 EOF 00:24:37.389 )") 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:37.389 { 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme$subsystem", 00:24:37.389 "trtype": "$TEST_TRANSPORT", 00:24:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "$NVMF_PORT", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.389 "hdgst": ${hdgst:-false}, 00:24:37.389 "ddgst": ${ddgst:-false} 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 } 00:24:37.389 EOF 00:24:37.389 )") 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:37.389 { 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme$subsystem", 00:24:37.389 "trtype": "$TEST_TRANSPORT", 00:24:37.389 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "$NVMF_PORT", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.389 "hdgst": ${hdgst:-false}, 00:24:37.389 "ddgst": ${ddgst:-false} 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 } 00:24:37.389 EOF 00:24:37.389 )") 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.389 [2024-11-19 09:42:24.065927] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:37.389 [2024-11-19 09:42:24.065997] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:37.389 { 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme$subsystem", 00:24:37.389 "trtype": "$TEST_TRANSPORT", 00:24:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "$NVMF_PORT", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.389 "hdgst": ${hdgst:-false}, 00:24:37.389 "ddgst": ${ddgst:-false} 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 } 00:24:37.389 EOF 00:24:37.389 )") 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:37.389 { 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme$subsystem", 00:24:37.389 "trtype": "$TEST_TRANSPORT", 00:24:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "$NVMF_PORT", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.389 "hdgst": ${hdgst:-false}, 00:24:37.389 "ddgst": ${ddgst:-false} 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 } 00:24:37.389 EOF 00:24:37.389 )") 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:37.389 { 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme$subsystem", 00:24:37.389 "trtype": "$TEST_TRANSPORT", 00:24:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "$NVMF_PORT", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.389 "hdgst": ${hdgst:-false}, 00:24:37.389 "ddgst": ${ddgst:-false} 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 } 00:24:37.389 EOF 00:24:37.389 )") 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:24:37.389 { 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme$subsystem", 00:24:37.389 "trtype": "$TEST_TRANSPORT", 00:24:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "$NVMF_PORT", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.389 "hdgst": ${hdgst:-false}, 00:24:37.389 "ddgst": ${ddgst:-false} 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 } 00:24:37.389 EOF 00:24:37.389 )") 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:37.389 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme1", 00:24:37.389 "trtype": "tcp", 00:24:37.389 "traddr": "10.0.0.2", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "4420", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:37.389 "hdgst": false, 00:24:37.389 "ddgst": false 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 },{ 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme2", 00:24:37.389 "trtype": "tcp", 00:24:37.389 "traddr": "10.0.0.2", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "4420", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:37.389 "hdgst": false, 00:24:37.389 "ddgst": false 00:24:37.389 }, 00:24:37.389 "method": "bdev_nvme_attach_controller" 00:24:37.389 },{ 00:24:37.389 "params": { 00:24:37.389 "name": "Nvme3", 00:24:37.389 "trtype": "tcp", 00:24:37.389 "traddr": 
"10.0.0.2", 00:24:37.389 "adrfam": "ipv4", 00:24:37.389 "trsvcid": "4420", 00:24:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:37.389 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:37.389 "hdgst": false, 00:24:37.390 "ddgst": false 00:24:37.390 }, 00:24:37.390 "method": "bdev_nvme_attach_controller" 00:24:37.390 },{ 00:24:37.390 "params": { 00:24:37.390 "name": "Nvme4", 00:24:37.390 "trtype": "tcp", 00:24:37.390 "traddr": "10.0.0.2", 00:24:37.390 "adrfam": "ipv4", 00:24:37.390 "trsvcid": "4420", 00:24:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:37.390 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:37.390 "hdgst": false, 00:24:37.390 "ddgst": false 00:24:37.390 }, 00:24:37.390 "method": "bdev_nvme_attach_controller" 00:24:37.390 },{ 00:24:37.390 "params": { 00:24:37.390 "name": "Nvme5", 00:24:37.390 "trtype": "tcp", 00:24:37.390 "traddr": "10.0.0.2", 00:24:37.390 "adrfam": "ipv4", 00:24:37.390 "trsvcid": "4420", 00:24:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:37.390 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:37.390 "hdgst": false, 00:24:37.390 "ddgst": false 00:24:37.390 }, 00:24:37.390 "method": "bdev_nvme_attach_controller" 00:24:37.390 },{ 00:24:37.390 "params": { 00:24:37.390 "name": "Nvme6", 00:24:37.390 "trtype": "tcp", 00:24:37.390 "traddr": "10.0.0.2", 00:24:37.390 "adrfam": "ipv4", 00:24:37.390 "trsvcid": "4420", 00:24:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:37.390 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:37.390 "hdgst": false, 00:24:37.390 "ddgst": false 00:24:37.390 }, 00:24:37.390 "method": "bdev_nvme_attach_controller" 00:24:37.390 },{ 00:24:37.390 "params": { 00:24:37.390 "name": "Nvme7", 00:24:37.390 "trtype": "tcp", 00:24:37.390 "traddr": "10.0.0.2", 00:24:37.390 "adrfam": "ipv4", 00:24:37.390 "trsvcid": "4420", 00:24:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:37.390 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:37.390 "hdgst": false, 00:24:37.390 "ddgst": false 00:24:37.390 }, 00:24:37.390 
"method": "bdev_nvme_attach_controller" 00:24:37.390 },{ 00:24:37.390 "params": { 00:24:37.390 "name": "Nvme8", 00:24:37.390 "trtype": "tcp", 00:24:37.390 "traddr": "10.0.0.2", 00:24:37.390 "adrfam": "ipv4", 00:24:37.390 "trsvcid": "4420", 00:24:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:37.390 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:37.390 "hdgst": false, 00:24:37.390 "ddgst": false 00:24:37.390 }, 00:24:37.390 "method": "bdev_nvme_attach_controller" 00:24:37.390 },{ 00:24:37.390 "params": { 00:24:37.390 "name": "Nvme9", 00:24:37.390 "trtype": "tcp", 00:24:37.390 "traddr": "10.0.0.2", 00:24:37.390 "adrfam": "ipv4", 00:24:37.390 "trsvcid": "4420", 00:24:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:37.390 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:37.390 "hdgst": false, 00:24:37.390 "ddgst": false 00:24:37.390 }, 00:24:37.390 "method": "bdev_nvme_attach_controller" 00:24:37.390 },{ 00:24:37.390 "params": { 00:24:37.390 "name": "Nvme10", 00:24:37.390 "trtype": "tcp", 00:24:37.390 "traddr": "10.0.0.2", 00:24:37.390 "adrfam": "ipv4", 00:24:37.390 "trsvcid": "4420", 00:24:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:37.390 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:37.390 "hdgst": false, 00:24:37.390 "ddgst": false 00:24:37.390 }, 00:24:37.390 "method": "bdev_nvme_attach_controller" 00:24:37.390 }' 00:24:37.653 [2024-11-19 09:42:24.161826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.653 [2024-11-19 09:42:24.215383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.039 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.039 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:39.039 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
00:24:39.039 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.039 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:39.039 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.039 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 401673 00:24:39.039 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:39.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 401673 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:39.039 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 401291 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.984 { 00:24:39.984 "params": { 00:24:39.984 "name": "Nvme$subsystem", 00:24:39.984 "trtype": "$TEST_TRANSPORT", 00:24:39.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.984 "adrfam": "ipv4", 00:24:39.984 "trsvcid": "$NVMF_PORT", 00:24:39.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.984 "hdgst": ${hdgst:-false}, 00:24:39.984 "ddgst": ${ddgst:-false} 00:24:39.984 }, 00:24:39.984 "method": "bdev_nvme_attach_controller" 00:24:39.984 } 00:24:39.984 EOF 00:24:39.984 )") 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.984 { 00:24:39.984 "params": { 00:24:39.984 "name": "Nvme$subsystem", 00:24:39.984 "trtype": "$TEST_TRANSPORT", 00:24:39.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.984 "adrfam": "ipv4", 00:24:39.984 "trsvcid": "$NVMF_PORT", 00:24:39.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.984 "hdgst": ${hdgst:-false}, 00:24:39.984 "ddgst": ${ddgst:-false} 00:24:39.984 }, 00:24:39.984 "method": "bdev_nvme_attach_controller" 00:24:39.984 } 00:24:39.984 EOF 00:24:39.984 )") 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.984 { 00:24:39.984 "params": { 00:24:39.984 "name": "Nvme$subsystem", 
00:24:39.984 "trtype": "$TEST_TRANSPORT", 00:24:39.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.984 "adrfam": "ipv4", 00:24:39.984 "trsvcid": "$NVMF_PORT", 00:24:39.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.984 "hdgst": ${hdgst:-false}, 00:24:39.984 "ddgst": ${ddgst:-false} 00:24:39.984 }, 00:24:39.984 "method": "bdev_nvme_attach_controller" 00:24:39.984 } 00:24:39.984 EOF 00:24:39.984 )") 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.984 { 00:24:39.984 "params": { 00:24:39.984 "name": "Nvme$subsystem", 00:24:39.984 "trtype": "$TEST_TRANSPORT", 00:24:39.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.984 "adrfam": "ipv4", 00:24:39.984 "trsvcid": "$NVMF_PORT", 00:24:39.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.984 "hdgst": ${hdgst:-false}, 00:24:39.984 "ddgst": ${ddgst:-false} 00:24:39.984 }, 00:24:39.984 "method": "bdev_nvme_attach_controller" 00:24:39.984 } 00:24:39.984 EOF 00:24:39.984 )") 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.984 { 00:24:39.984 "params": { 00:24:39.984 "name": "Nvme$subsystem", 00:24:39.984 "trtype": "$TEST_TRANSPORT", 00:24:39.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.984 "adrfam": "ipv4", 
00:24:39.984 "trsvcid": "$NVMF_PORT", 00:24:39.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.984 "hdgst": ${hdgst:-false}, 00:24:39.984 "ddgst": ${ddgst:-false} 00:24:39.984 }, 00:24:39.984 "method": "bdev_nvme_attach_controller" 00:24:39.984 } 00:24:39.984 EOF 00:24:39.984 )") 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.984 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.984 { 00:24:39.984 "params": { 00:24:39.984 "name": "Nvme$subsystem", 00:24:39.984 "trtype": "$TEST_TRANSPORT", 00:24:39.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "$NVMF_PORT", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.985 "hdgst": ${hdgst:-false}, 00:24:39.985 "ddgst": ${ddgst:-false} 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 } 00:24:39.985 EOF 00:24:39.985 )") 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.985 { 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme$subsystem", 00:24:39.985 "trtype": "$TEST_TRANSPORT", 00:24:39.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "$NVMF_PORT", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.985 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:39.985 "hdgst": ${hdgst:-false}, 00:24:39.985 "ddgst": ${ddgst:-false} 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 } 00:24:39.985 EOF 00:24:39.985 )") 00:24:39.985 [2024-11-19 09:42:26.689806] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:39.985 [2024-11-19 09:42:26.689857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402302 ] 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.985 { 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme$subsystem", 00:24:39.985 "trtype": "$TEST_TRANSPORT", 00:24:39.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "$NVMF_PORT", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.985 "hdgst": ${hdgst:-false}, 00:24:39.985 "ddgst": ${ddgst:-false} 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 } 00:24:39.985 EOF 00:24:39.985 )") 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.985 { 00:24:39.985 
"params": { 00:24:39.985 "name": "Nvme$subsystem", 00:24:39.985 "trtype": "$TEST_TRANSPORT", 00:24:39.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "$NVMF_PORT", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.985 "hdgst": ${hdgst:-false}, 00:24:39.985 "ddgst": ${ddgst:-false} 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 } 00:24:39.985 EOF 00:24:39.985 )") 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:39.985 { 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme$subsystem", 00:24:39.985 "trtype": "$TEST_TRANSPORT", 00:24:39.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "$NVMF_PORT", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.985 "hdgst": ${hdgst:-false}, 00:24:39.985 "ddgst": ${ddgst:-false} 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 } 00:24:39.985 EOF 00:24:39.985 )") 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:39.985 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme1", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.985 "hdgst": false, 00:24:39.985 "ddgst": false 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 },{ 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme2", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:39.985 "hdgst": false, 00:24:39.985 "ddgst": false 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 },{ 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme3", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:39.985 "hdgst": false, 00:24:39.985 "ddgst": false 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 },{ 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme4", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:39.985 "hdgst": false, 00:24:39.985 "ddgst": false 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 },{ 00:24:39.985 "params": { 
00:24:39.985 "name": "Nvme5", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:39.985 "hdgst": false, 00:24:39.985 "ddgst": false 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 },{ 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme6", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:39.985 "hdgst": false, 00:24:39.985 "ddgst": false 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 },{ 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme7", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:39.985 "hdgst": false, 00:24:39.985 "ddgst": false 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 },{ 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme8", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:39.985 "hdgst": false, 00:24:39.985 "ddgst": false 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 },{ 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme9", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:39.985 "hdgst": false, 00:24:39.985 "ddgst": false 00:24:39.985 }, 00:24:39.985 "method": "bdev_nvme_attach_controller" 00:24:39.985 },{ 00:24:39.985 "params": { 00:24:39.985 "name": "Nvme10", 00:24:39.985 "trtype": "tcp", 00:24:39.985 "traddr": "10.0.0.2", 00:24:39.985 "adrfam": "ipv4", 00:24:39.985 "trsvcid": "4420", 00:24:39.985 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:39.985 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:39.985 "hdgst": false, 00:24:39.986 "ddgst": false 00:24:39.986 }, 00:24:39.986 "method": "bdev_nvme_attach_controller" 00:24:39.986 }' 00:24:40.246 [2024-11-19 09:42:26.778012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.246 [2024-11-19 09:42:26.813525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.631 Running I/O for 1 seconds... 00:24:42.580 1809.00 IOPS, 113.06 MiB/s 00:24:42.580 Latency(us) 00:24:42.580 [2024-11-19T08:42:29.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.580 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:42.580 Verification LBA range: start 0x0 length 0x400 00:24:42.580 Nvme1n1 : 1.11 234.86 14.68 0.00 0.00 268653.28 4751.36 228939.09 00:24:42.580 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:42.580 Verification LBA range: start 0x0 length 0x400 00:24:42.580 Nvme2n1 : 1.08 237.67 14.85 0.00 0.00 261585.92 19114.67 230686.72 00:24:42.580 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:42.580 Verification LBA range: start 0x0 length 0x400 00:24:42.580 Nvme3n1 : 1.07 239.68 14.98 0.00 0.00 254693.33 16274.77 263891.63 00:24:42.580 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:42.580 Verification LBA range: start 0x0 length 0x400 00:24:42.580 Nvme4n1 : 1.11 234.27 14.64 0.00 0.00 255384.13 4096.00 251658.24 00:24:42.580 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:42.580 Verification LBA range: start 0x0 length 0x400 00:24:42.580 Nvme5n1 : 1.11 229.60 14.35 0.00 0.00 256928.21 23374.51 246415.36 00:24:42.580 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:42.580 Verification LBA range: start 0x0 length 0x400 00:24:42.581 Nvme6n1 : 1.14 223.72 13.98 0.00 0.00 258659.84 15510.19 260396.37 00:24:42.581 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:42.581 Verification LBA range: start 0x0 length 0x400 00:24:42.581 Nvme7n1 : 1.18 271.82 16.99 0.00 0.00 210273.71 12943.36 253405.87 00:24:42.581 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:42.581 Verification LBA range: start 0x0 length 0x400 00:24:42.581 Nvme8n1 : 1.19 267.82 16.74 0.00 0.00 209574.20 14417.92 248162.99 00:24:42.581 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:42.581 Verification LBA range: start 0x0 length 0x400 00:24:42.581 Nvme9n1 : 1.19 268.34 16.77 0.00 0.00 205893.12 12178.77 260396.37 00:24:42.581 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:42.581 Verification LBA range: start 0x0 length 0x400 00:24:42.581 Nvme10n1 : 1.17 218.45 13.65 0.00 0.00 247680.21 17476.27 272629.76 00:24:42.581 [2024-11-19T08:42:29.329Z] =================================================================================================================== 00:24:42.581 [2024-11-19T08:42:29.329Z] Total : 2426.24 151.64 0.00 0.00 240620.69 4096.00 272629.76 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:42.854 rmmod nvme_tcp 00:24:42.854 rmmod nvme_fabrics 00:24:42.854 rmmod nvme_keyring 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 401291 ']' 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 401291 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 401291 ']' 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 401291 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 401291 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 401291' 00:24:42.854 killing process with pid 401291 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 401291 00:24:42.854 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 401291 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.115 09:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.115 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.030 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.291 00:24:45.291 real 0m16.826s 00:24:45.291 user 0m34.042s 00:24:45.291 sys 0m6.858s 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:45.292 ************************************ 00:24:45.292 END TEST nvmf_shutdown_tc1 00:24:45.292 ************************************ 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:45.292 ************************************ 00:24:45.292 
START TEST nvmf_shutdown_tc2 00:24:45.292 ************************************ 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.292 09:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.292 09:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:45.292 09:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:45.292 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:45.292 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:45.292 09:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.292 09:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:45.292 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:45.292 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:45.292 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.293 09:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.293 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.293 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:45.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:45.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:24:45.555 00:24:45.555 --- 10.0.0.2 ping statistics --- 00:24:45.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.555 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:24:45.555 00:24:45.555 --- 10.0.0.1 ping statistics --- 00:24:45.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.555 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:45.555 09:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=403484 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 403484 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 403484 ']' 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.555 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:45.555 [2024-11-19 09:42:32.287252] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:45.555 [2024-11-19 09:42:32.287314] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.816 [2024-11-19 09:42:32.381677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.816 [2024-11-19 09:42:32.420775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.816 [2024-11-19 09:42:32.420813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.816 [2024-11-19 09:42:32.420819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.816 [2024-11-19 09:42:32.420824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.816 [2024-11-19 09:42:32.420828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:45.816 [2024-11-19 09:42:32.422546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.816 [2024-11-19 09:42:32.422703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.816 [2024-11-19 09:42:32.422858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.816 [2024-11-19 09:42:32.422860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:46.388 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.388 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:46.388 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:46.388 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:46.388 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.650 [2024-11-19 09:42:33.146844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.650 09:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.650 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:46.650 Malloc1 00:24:46.650 [2024-11-19 09:42:33.260610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.650 Malloc2 00:24:46.650 Malloc3 00:24:46.650 Malloc4 00:24:46.650 Malloc5 00:24:46.911 Malloc6 00:24:46.912 Malloc7 00:24:46.912 Malloc8 00:24:46.912 Malloc9 
00:24:46.912 Malloc10 00:24:46.912 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.912 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:46.912 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:46.912 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=403754 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 403754 /var/tmp/bdevperf.sock 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 403754 ']' 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:47.174 { 00:24:47.174 "params": { 00:24:47.174 "name": "Nvme$subsystem", 00:24:47.174 "trtype": "$TEST_TRANSPORT", 00:24:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.174 "adrfam": "ipv4", 00:24:47.174 "trsvcid": "$NVMF_PORT", 00:24:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.174 "hdgst": ${hdgst:-false}, 00:24:47.174 "ddgst": ${ddgst:-false} 00:24:47.174 }, 00:24:47.174 "method": "bdev_nvme_attach_controller" 00:24:47.174 } 00:24:47.174 EOF 00:24:47.174 )") 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:47.174 { 00:24:47.174 "params": { 00:24:47.174 "name": "Nvme$subsystem", 00:24:47.174 "trtype": "$TEST_TRANSPORT", 00:24:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.174 "adrfam": "ipv4", 00:24:47.174 "trsvcid": "$NVMF_PORT", 00:24:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.174 "hdgst": ${hdgst:-false}, 00:24:47.174 "ddgst": ${ddgst:-false} 00:24:47.174 }, 00:24:47.174 "method": "bdev_nvme_attach_controller" 00:24:47.174 } 00:24:47.174 EOF 00:24:47.174 )") 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:47.174 { 00:24:47.174 "params": { 00:24:47.174 "name": "Nvme$subsystem", 00:24:47.174 "trtype": "$TEST_TRANSPORT", 00:24:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.174 "adrfam": "ipv4", 00:24:47.174 "trsvcid": "$NVMF_PORT", 00:24:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.174 "hdgst": ${hdgst:-false}, 00:24:47.174 "ddgst": ${ddgst:-false} 00:24:47.174 }, 00:24:47.174 "method": "bdev_nvme_attach_controller" 00:24:47.174 } 00:24:47.174 EOF 00:24:47.174 )") 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:24:47.174 { 00:24:47.174 "params": { 00:24:47.174 "name": "Nvme$subsystem", 00:24:47.174 "trtype": "$TEST_TRANSPORT", 00:24:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.174 "adrfam": "ipv4", 00:24:47.174 "trsvcid": "$NVMF_PORT", 00:24:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.174 "hdgst": ${hdgst:-false}, 00:24:47.174 "ddgst": ${ddgst:-false} 00:24:47.174 }, 00:24:47.174 "method": "bdev_nvme_attach_controller" 00:24:47.174 } 00:24:47.174 EOF 00:24:47.174 )") 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:47.174 { 00:24:47.174 "params": { 00:24:47.174 "name": "Nvme$subsystem", 00:24:47.174 "trtype": "$TEST_TRANSPORT", 00:24:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.174 "adrfam": "ipv4", 00:24:47.174 "trsvcid": "$NVMF_PORT", 00:24:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.174 "hdgst": ${hdgst:-false}, 00:24:47.174 "ddgst": ${ddgst:-false} 00:24:47.174 }, 00:24:47.174 "method": "bdev_nvme_attach_controller" 00:24:47.174 } 00:24:47.174 EOF 00:24:47.174 )") 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:47.174 { 00:24:47.174 "params": { 00:24:47.174 "name": "Nvme$subsystem", 00:24:47.174 "trtype": "$TEST_TRANSPORT", 
00:24:47.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.174 "adrfam": "ipv4", 00:24:47.174 "trsvcid": "$NVMF_PORT", 00:24:47.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.174 "hdgst": ${hdgst:-false}, 00:24:47.174 "ddgst": ${ddgst:-false} 00:24:47.174 }, 00:24:47.174 "method": "bdev_nvme_attach_controller" 00:24:47.174 } 00:24:47.174 EOF 00:24:47.174 )") 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.174 [2024-11-19 09:42:33.707913] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:47.174 [2024-11-19 09:42:33.707965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403754 ] 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:47.174 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:47.174 { 00:24:47.174 "params": { 00:24:47.174 "name": "Nvme$subsystem", 00:24:47.175 "trtype": "$TEST_TRANSPORT", 00:24:47.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "$NVMF_PORT", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.175 "hdgst": ${hdgst:-false}, 00:24:47.175 "ddgst": ${ddgst:-false} 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 } 00:24:47.175 EOF 00:24:47.175 )") 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:47.175 { 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme$subsystem", 00:24:47.175 "trtype": "$TEST_TRANSPORT", 00:24:47.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "$NVMF_PORT", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.175 "hdgst": ${hdgst:-false}, 00:24:47.175 "ddgst": ${ddgst:-false} 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 } 00:24:47.175 EOF 00:24:47.175 )") 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:47.175 { 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme$subsystem", 00:24:47.175 "trtype": "$TEST_TRANSPORT", 00:24:47.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "$NVMF_PORT", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.175 "hdgst": ${hdgst:-false}, 00:24:47.175 "ddgst": ${ddgst:-false} 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 } 00:24:47.175 EOF 00:24:47.175 )") 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:47.175 09:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:47.175 { 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme$subsystem", 00:24:47.175 "trtype": "$TEST_TRANSPORT", 00:24:47.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "$NVMF_PORT", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.175 "hdgst": ${hdgst:-false}, 00:24:47.175 "ddgst": ${ddgst:-false} 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 } 00:24:47.175 EOF 00:24:47.175 )") 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:24:47.175 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme1", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 },{ 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme2", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 },{ 
00:24:47.175 "params": { 00:24:47.175 "name": "Nvme3", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 },{ 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme4", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 },{ 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme5", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 },{ 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme6", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 },{ 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme7", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:47.175 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 },{ 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme8", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 },{ 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme9", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 },{ 00:24:47.175 "params": { 00:24:47.175 "name": "Nvme10", 00:24:47.175 "trtype": "tcp", 00:24:47.175 "traddr": "10.0.0.2", 00:24:47.175 "adrfam": "ipv4", 00:24:47.175 "trsvcid": "4420", 00:24:47.175 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:47.175 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:47.175 "hdgst": false, 00:24:47.175 "ddgst": false 00:24:47.175 }, 00:24:47.175 "method": "bdev_nvme_attach_controller" 00:24:47.175 }' 00:24:47.175 [2024-11-19 09:42:33.796240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.175 [2024-11-19 09:42:33.832458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.088 Running I/O for 10 seconds... 
00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:49.088 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:49.349 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=135 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 403754 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 403754 ']' 
00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 403754 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 403754 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.609 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 403754' 00:24:49.610 killing process with pid 403754 00:24:49.610 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 403754 00:24:49.610 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 403754 00:24:49.610 Received shutdown signal, test time was about 0.996872 seconds 00:24:49.610 00:24:49.610 Latency(us) 00:24:49.610 [2024-11-19T08:42:36.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.610 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme1n1 : 0.99 264.00 16.50 0.00 0.00 239483.75 638.29 251658.24 00:24:49.610 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme2n1 : 1.00 257.10 16.07 0.00 0.00 241155.41 15182.51 248162.99 00:24:49.610 Job: 
Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme3n1 : 0.98 260.38 16.27 0.00 0.00 232872.53 15073.28 230686.72 00:24:49.610 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme4n1 : 0.97 264.13 16.51 0.00 0.00 224981.12 18896.21 241172.48 00:24:49.610 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme5n1 : 0.96 199.78 12.49 0.00 0.00 290628.27 14417.92 241172.48 00:24:49.610 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme6n1 : 0.98 200.05 12.50 0.00 0.00 283528.67 1870.51 279620.27 00:24:49.610 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme7n1 : 0.99 257.59 16.10 0.00 0.00 216513.92 26214.40 244667.73 00:24:49.610 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme8n1 : 0.98 260.05 16.25 0.00 0.00 209120.43 19879.25 248162.99 00:24:49.610 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme9n1 : 0.98 262.10 16.38 0.00 0.00 202687.79 15510.19 239424.85 00:24:49.610 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:49.610 Verification LBA range: start 0x0 length 0x400 00:24:49.610 Nvme10n1 : 0.97 197.69 12.36 0.00 0.00 261376.57 19223.89 270882.13 00:24:49.610 [2024-11-19T08:42:36.358Z] =================================================================================================================== 00:24:49.610 [2024-11-19T08:42:36.358Z] 
Total : 2422.87 151.43 0.00 0.00 237214.39 638.29 279620.27 00:24:49.870 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 403484 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.816 rmmod nvme_tcp 00:24:50.816 rmmod nvme_fabrics 00:24:50.816 rmmod nvme_keyring 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:24:50.816 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 403484 ']' 00:24:50.817 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 403484 00:24:50.817 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 403484 ']' 00:24:50.817 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 403484 00:24:50.817 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:50.817 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.817 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 403484 00:24:51.077 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:51.077 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:51.077 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 403484' 00:24:51.077 killing process with pid 403484 00:24:51.077 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 403484 00:24:51.077 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 403484 00:24:51.338 09:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.338 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.249 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.249 00:24:53.249 real 0m8.053s 00:24:53.249 user 0m24.676s 00:24:53.249 sys 0m1.279s 00:24:53.249 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.249 09:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:53.249 ************************************ 00:24:53.249 END TEST nvmf_shutdown_tc2 00:24:53.249 ************************************ 00:24:53.249 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:53.249 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:53.249 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.249 09:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:53.511 ************************************ 00:24:53.511 START TEST nvmf_shutdown_tc3 00:24:53.511 ************************************ 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.511 09:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.511 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:53.512 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.512 09:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:53.512 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:53.512 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:53.512 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.512 09:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.512 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.740 ms 00:24:53.775 00:24:53.775 --- 10.0.0.2 ping statistics --- 00:24:53.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.775 rtt min/avg/max/mdev = 0.740/0.740/0.740/0.000 ms 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:53.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:24:53.775 00:24:53.775 --- 10.0.0.1 ping statistics --- 00:24:53.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.775 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=405072 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 405072 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 405072 ']' 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.775 09:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.775 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:53.775 [2024-11-19 09:42:40.443520] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:53.775 [2024-11-19 09:42:40.443589] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.037 [2024-11-19 09:42:40.536803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.038 [2024-11-19 09:42:40.570056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.038 [2024-11-19 09:42:40.570084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.038 [2024-11-19 09:42:40.570090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.038 [2024-11-19 09:42:40.570095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.038 [2024-11-19 09:42:40.570100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.038 [2024-11-19 09:42:40.571562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.038 [2024-11-19 09:42:40.571613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.038 [2024-11-19 09:42:40.571725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.038 [2024-11-19 09:42:40.571727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:54.609 [2024-11-19 09:42:41.298089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.609 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.610 09:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.610 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:54.872 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:54.872 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:54.872 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:54.872 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.872 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:54.872 Malloc1 00:24:54.872 [2024-11-19 09:42:41.404535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.872 Malloc2 00:24:54.872 Malloc3 00:24:54.872 Malloc4 00:24:54.872 Malloc5 00:24:54.872 Malloc6 00:24:54.872 Malloc7 00:24:55.134 Malloc8 00:24:55.134 Malloc9 
00:24:55.134 Malloc10 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=405402 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 405402 /var/tmp/bdevperf.sock 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 405402 ']' 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.134 { 00:24:55.134 "params": { 00:24:55.134 "name": "Nvme$subsystem", 00:24:55.134 "trtype": "$TEST_TRANSPORT", 00:24:55.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.134 "adrfam": "ipv4", 00:24:55.134 "trsvcid": "$NVMF_PORT", 00:24:55.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.134 "hdgst": ${hdgst:-false}, 00:24:55.134 "ddgst": ${ddgst:-false} 00:24:55.134 }, 00:24:55.134 "method": "bdev_nvme_attach_controller" 00:24:55.134 } 00:24:55.134 EOF 00:24:55.134 )") 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.134 { 00:24:55.134 "params": { 00:24:55.134 "name": "Nvme$subsystem", 00:24:55.134 "trtype": "$TEST_TRANSPORT", 00:24:55.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.134 "adrfam": "ipv4", 00:24:55.134 "trsvcid": "$NVMF_PORT", 00:24:55.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.134 "hdgst": ${hdgst:-false}, 00:24:55.134 "ddgst": ${ddgst:-false} 00:24:55.134 }, 00:24:55.134 "method": "bdev_nvme_attach_controller" 00:24:55.134 } 00:24:55.134 EOF 00:24:55.134 )") 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.134 { 00:24:55.134 "params": { 00:24:55.134 "name": "Nvme$subsystem", 00:24:55.134 "trtype": "$TEST_TRANSPORT", 00:24:55.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.134 "adrfam": "ipv4", 00:24:55.134 "trsvcid": "$NVMF_PORT", 00:24:55.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.134 "hdgst": ${hdgst:-false}, 00:24:55.134 "ddgst": ${ddgst:-false} 00:24:55.134 }, 00:24:55.134 "method": "bdev_nvme_attach_controller" 00:24:55.134 } 00:24:55.134 EOF 00:24:55.134 )") 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:24:55.134 { 00:24:55.134 "params": { 00:24:55.134 "name": "Nvme$subsystem", 00:24:55.134 "trtype": "$TEST_TRANSPORT", 00:24:55.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.134 "adrfam": "ipv4", 00:24:55.134 "trsvcid": "$NVMF_PORT", 00:24:55.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.134 "hdgst": ${hdgst:-false}, 00:24:55.134 "ddgst": ${ddgst:-false} 00:24:55.134 }, 00:24:55.134 "method": "bdev_nvme_attach_controller" 00:24:55.134 } 00:24:55.134 EOF 00:24:55.134 )") 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.134 { 00:24:55.134 "params": { 00:24:55.134 "name": "Nvme$subsystem", 00:24:55.134 "trtype": "$TEST_TRANSPORT", 00:24:55.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.134 "adrfam": "ipv4", 00:24:55.134 "trsvcid": "$NVMF_PORT", 00:24:55.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.134 "hdgst": ${hdgst:-false}, 00:24:55.134 "ddgst": ${ddgst:-false} 00:24:55.134 }, 00:24:55.134 "method": "bdev_nvme_attach_controller" 00:24:55.134 } 00:24:55.134 EOF 00:24:55.134 )") 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.134 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.134 { 00:24:55.134 "params": { 00:24:55.134 "name": "Nvme$subsystem", 00:24:55.134 "trtype": "$TEST_TRANSPORT", 
00:24:55.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.134 "adrfam": "ipv4", 00:24:55.135 "trsvcid": "$NVMF_PORT", 00:24:55.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.135 "hdgst": ${hdgst:-false}, 00:24:55.135 "ddgst": ${ddgst:-false} 00:24:55.135 }, 00:24:55.135 "method": "bdev_nvme_attach_controller" 00:24:55.135 } 00:24:55.135 EOF 00:24:55.135 )") 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.135 { 00:24:55.135 "params": { 00:24:55.135 "name": "Nvme$subsystem", 00:24:55.135 "trtype": "$TEST_TRANSPORT", 00:24:55.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.135 "adrfam": "ipv4", 00:24:55.135 "trsvcid": "$NVMF_PORT", 00:24:55.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.135 "hdgst": ${hdgst:-false}, 00:24:55.135 "ddgst": ${ddgst:-false} 00:24:55.135 }, 00:24:55.135 "method": "bdev_nvme_attach_controller" 00:24:55.135 } 00:24:55.135 EOF 00:24:55.135 )") 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.135 [2024-11-19 09:42:41.850990] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:55.135 [2024-11-19 09:42:41.851041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405402 ] 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.135 { 00:24:55.135 "params": { 00:24:55.135 "name": "Nvme$subsystem", 00:24:55.135 "trtype": "$TEST_TRANSPORT", 00:24:55.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.135 "adrfam": "ipv4", 00:24:55.135 "trsvcid": "$NVMF_PORT", 00:24:55.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.135 "hdgst": ${hdgst:-false}, 00:24:55.135 "ddgst": ${ddgst:-false} 00:24:55.135 }, 00:24:55.135 "method": "bdev_nvme_attach_controller" 00:24:55.135 } 00:24:55.135 EOF 00:24:55.135 )") 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.135 { 00:24:55.135 "params": { 00:24:55.135 "name": "Nvme$subsystem", 00:24:55.135 "trtype": "$TEST_TRANSPORT", 00:24:55.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.135 "adrfam": "ipv4", 00:24:55.135 "trsvcid": "$NVMF_PORT", 00:24:55.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.135 "hdgst": ${hdgst:-false}, 00:24:55.135 "ddgst": ${ddgst:-false} 00:24:55.135 }, 00:24:55.135 "method": 
"bdev_nvme_attach_controller" 00:24:55.135 } 00:24:55.135 EOF 00:24:55.135 )") 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:55.135 { 00:24:55.135 "params": { 00:24:55.135 "name": "Nvme$subsystem", 00:24:55.135 "trtype": "$TEST_TRANSPORT", 00:24:55.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.135 "adrfam": "ipv4", 00:24:55.135 "trsvcid": "$NVMF_PORT", 00:24:55.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.135 "hdgst": ${hdgst:-false}, 00:24:55.135 "ddgst": ${ddgst:-false} 00:24:55.135 }, 00:24:55.135 "method": "bdev_nvme_attach_controller" 00:24:55.135 } 00:24:55.135 EOF 00:24:55.135 )") 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:55.135 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:24:55.397 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:24:55.397 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:55.397 "params": { 00:24:55.397 "name": "Nvme1", 00:24:55.397 "trtype": "tcp", 00:24:55.397 "traddr": "10.0.0.2", 00:24:55.397 "adrfam": "ipv4", 00:24:55.397 "trsvcid": "4420", 00:24:55.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:55.397 "hdgst": false, 00:24:55.397 "ddgst": false 00:24:55.397 }, 00:24:55.397 "method": "bdev_nvme_attach_controller" 00:24:55.397 },{ 00:24:55.397 "params": { 00:24:55.397 "name": "Nvme2", 00:24:55.397 "trtype": "tcp", 00:24:55.397 "traddr": "10.0.0.2", 00:24:55.397 "adrfam": "ipv4", 00:24:55.397 "trsvcid": "4420", 00:24:55.397 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:55.397 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:55.397 "hdgst": false, 00:24:55.397 "ddgst": false 00:24:55.397 }, 00:24:55.397 "method": "bdev_nvme_attach_controller" 00:24:55.398 },{ 00:24:55.398 "params": { 00:24:55.398 "name": "Nvme3", 00:24:55.398 "trtype": "tcp", 00:24:55.398 "traddr": "10.0.0.2", 00:24:55.398 "adrfam": "ipv4", 00:24:55.398 "trsvcid": "4420", 00:24:55.398 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:55.398 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:55.398 "hdgst": false, 00:24:55.398 "ddgst": false 00:24:55.398 }, 00:24:55.398 "method": "bdev_nvme_attach_controller" 00:24:55.398 },{ 00:24:55.398 "params": { 00:24:55.398 "name": "Nvme4", 00:24:55.398 "trtype": "tcp", 00:24:55.398 "traddr": "10.0.0.2", 00:24:55.398 "adrfam": "ipv4", 00:24:55.398 "trsvcid": "4420", 00:24:55.398 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:55.398 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:55.398 "hdgst": false, 00:24:55.398 "ddgst": false 00:24:55.398 }, 00:24:55.398 "method": "bdev_nvme_attach_controller" 00:24:55.398 },{ 00:24:55.398 "params": { 
00:24:55.398 "name": "Nvme5",
00:24:55.398 "trtype": "tcp",
00:24:55.398 "traddr": "10.0.0.2",
00:24:55.398 "adrfam": "ipv4",
00:24:55.398 "trsvcid": "4420",
00:24:55.398 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:24:55.398 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:24:55.398 "hdgst": false,
00:24:55.398 "ddgst": false
00:24:55.398 },
00:24:55.398 "method": "bdev_nvme_attach_controller"
00:24:55.398 },{
00:24:55.398 "params": {
00:24:55.398 "name": "Nvme6",
00:24:55.398 "trtype": "tcp",
00:24:55.398 "traddr": "10.0.0.2",
00:24:55.398 "adrfam": "ipv4",
00:24:55.398 "trsvcid": "4420",
00:24:55.398 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:24:55.398 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:24:55.398 "hdgst": false,
00:24:55.398 "ddgst": false
00:24:55.398 },
00:24:55.398 "method": "bdev_nvme_attach_controller"
00:24:55.398 },{
00:24:55.398 "params": {
00:24:55.398 "name": "Nvme7",
00:24:55.398 "trtype": "tcp",
00:24:55.398 "traddr": "10.0.0.2",
00:24:55.398 "adrfam": "ipv4",
00:24:55.398 "trsvcid": "4420",
00:24:55.398 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:24:55.398 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:24:55.398 "hdgst": false,
00:24:55.398 "ddgst": false
00:24:55.398 },
00:24:55.398 "method": "bdev_nvme_attach_controller"
00:24:55.398 },{
00:24:55.398 "params": {
00:24:55.398 "name": "Nvme8",
00:24:55.398 "trtype": "tcp",
00:24:55.398 "traddr": "10.0.0.2",
00:24:55.398 "adrfam": "ipv4",
00:24:55.398 "trsvcid": "4420",
00:24:55.398 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:24:55.398 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:24:55.398 "hdgst": false,
00:24:55.398 "ddgst": false
00:24:55.398 },
00:24:55.398 "method": "bdev_nvme_attach_controller"
00:24:55.398 },{
00:24:55.398 "params": {
00:24:55.398 "name": "Nvme9",
00:24:55.398 "trtype": "tcp",
00:24:55.398 "traddr": "10.0.0.2",
00:24:55.398 "adrfam": "ipv4",
00:24:55.398 "trsvcid": "4420",
00:24:55.398 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:24:55.398 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:24:55.398 "hdgst": false,
00:24:55.398 "ddgst": false
00:24:55.398 },
00:24:55.398 "method": "bdev_nvme_attach_controller"
00:24:55.398 },{
00:24:55.398 "params": {
00:24:55.398 "name": "Nvme10",
00:24:55.398 "trtype": "tcp",
00:24:55.398 "traddr": "10.0.0.2",
00:24:55.398 "adrfam": "ipv4",
00:24:55.398 "trsvcid": "4420",
00:24:55.398 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:24:55.398 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:24:55.398 "hdgst": false,
00:24:55.398 "ddgst": false
00:24:55.398 },
00:24:55.398 "method": "bdev_nvme_attach_controller"
00:24:55.398 }'
00:24:55.398 [2024-11-19 09:42:41.942975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:55.398 [2024-11-19 09:42:41.978968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:57.314 Running I/O for 10 seconds...
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:24:57.314 09:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:24:57.314 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
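The JSON stream above is the bdevperf attach configuration: one `bdev_nvme_attach_controller` block per controller, identical except for the `NvmeN` name and the `cnodeN`/`hostN` NQNs. A minimal sketch that emits blocks of the same shape; `gen_attach_config` is a hypothetical helper for illustration, not SPDK code:

```shell
#!/bin/sh
# Sketch: emit attach-controller config blocks shaped like the ones in the
# log above. gen_attach_config and its index range are assumptions.
gen_attach_config() {
  first=$1 last=$2
  i=$first
  while [ "$i" -le "$last" ]; do
    printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }\n' "$i" "$i" "$i"
    i=$((i + 1))
  done
}

gen_attach_config 5 10
```

In the real run these blocks sit inside a larger JSON config handed to the bdevperf app; the sketch only reproduces the per-controller fragment seen in the log.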
00:24:57.314 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:24:57.314 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:24:57.314 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:24:57.314 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.314 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:57.575 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.575 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=72
00:24:57.575 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']'
00:24:57.575 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']'
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 405072
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 405072 ']'
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 405072
00:24:57.855 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:24:57.856 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:57.856 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 405072
00:24:57.856 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:57.856 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:57.856 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 --
common/autotest_common.sh@972 -- # echo 'killing process with pid 405072'
00:24:57.856 killing process with pid 405072
00:24:57.856 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 405072
00:24:57.856 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 405072
00:24:57.856 [2024-11-19 09:42:44.460143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1d4e0 is same with the state(6) to be set
00:24:57.856 [2024-11-19 09:42:44.461598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef600 is same with the state(6) to be set
00:24:57.857 [2024-11-19 09:42:44.462845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.857 [2024-11-19 09:42:44.462886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.857 [2024-11-19 09:42:44.462904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.857 [2024-11-19 09:42:44.462912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.857 [2024-11-19 09:42:44.462922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.857 [2024-11-19 09:42:44.462930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.857 [2024-11-19 09:42:44.462940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.857 [2024-11-19 09:42:44.462947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.857 [2024-11-19 09:42:44.462957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.857 [2024-11-19 09:42:44.462964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.857 [2024-11-19 09:42:44.462974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.857 [2024-11-19 09:42:44.462981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.857 [2024-11-19 09:42:44.462991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.857 [2024-11-19 09:42:44.463003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.857 [2024-11-19 09:42:44.463013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.857 [2024-11-19 09:42:44.463021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.857 [2024-11-19 09:42:44.463030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
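The waitforio trace earlier in this log (target/shutdown.sh@51-70) is a bounded polling loop: up to 10 iterations, each extracting `num_read_ops` for Nvme1n1 from `bdev_get_iostat` with jq, returning 0 once the count reaches 100 (this run observed 3, then 72, then 136). A standalone sketch of the same pattern; `rpc_cmd` is stubbed here as an assumption, whereas the real helper talks to /var/tmp/bdevperf.sock:

```shell
#!/bin/bash
# Sketch of the waitforio polling pattern from target/shutdown.sh.
# The real rpc_cmd wraps SPDK's RPC client against a socket; this stub
# just returns canned iostat JSON so the loop terminates immediately.
rpc_cmd() {
  echo '{"bdevs": [{"name": "Nvme1n1", "num_read_ops": 136}]}'
}

waitforio() {
  local bdev=$1 ret=1 i count
  for ((i = 10; i != 0; i--)); do
    count=$(rpc_cmd bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio Nvme1n1 && echo "I/O observed"
```

The retry bound matters during shutdown testing: if the bdev never serves reads, the loop fails after ~2.5 s instead of hanging the test.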
00:24:57.857 [2024-11-19 09:42:44.463028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with t[2024-11-19 09:42:44.463038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:24:57.857 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.857 [2024-11-19 09:42:44.463052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.857 [2024-11-19 09:42:44.463054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.857 [2024-11-19 09:42:44.463059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.857 [2024-11-19 09:42:44.463061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.857 [2024-11-19 09:42:44.463067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.857 [2024-11-19 09:42:44.463069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.857 [2024-11-19 09:42:44.463073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.857 [2024-11-19 09:42:44.463078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.857 [2024-11-19 09:42:44.463080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.857 [2024-11-19 09:42:44.463086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the 
state(6) to be set 00:24:57.857 [2024-11-19 09:42:44.463088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.857 [2024-11-19 09:42:44.463091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with t[2024-11-19 09:42:44.463096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:24:57.858 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with t[2024-11-19 09:42:44.463108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128he state(6) to be set 00:24:57.858 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463233] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 
[2024-11-19 09:42:44.463275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 
00:24:57.858 [2024-11-19 09:42:44.463319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-11-19 09:42:44.463390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.858 [2024-11-19 09:42:44.463403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 
is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.858 [2024-11-19 09:42:44.463409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.859 [2024-11-19 09:42:44.463417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.859 [2024-11-19 09:42:44.463426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.859 [2024-11-19 09:42:44.463429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbefad0 is same with the state(6) to be set 00:24:57.859 [2024-11-19 09:42:44.463440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 
[2024-11-19 09:42:44.463662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.463992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.463999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.464009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.464016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.464025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-11-19 09:42:44.464032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.859 [2024-11-19 09:42:44.464399] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.859 [2024-11-19 09:42:44.464422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.859 [2024-11-19 09:42:44.464428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set 00:24:57.860 [2024-11-19 09:42:44.464537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 
is same with the state(6) to be set
00:24:57.860 [2024-11-19 09:42:44.464541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeffc0 is same with the state(6) to be set
00:24:57.860 (message repeated for tqpair=0xbeffc0 through [2024-11-19 09:42:44.464724])
00:24:57.860 [2024-11-19 09:42:44.464845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:57.860 [2024-11-19 09:42:44.464869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.860 (ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1 through cid:3)
00:24:57.860 [2024-11-19 09:42:44.464922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d810 is same with the state(6) to be set
00:24:57.861 (same ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence and recv-state error repeated for tqpair=0x100c420, 0x1483d00, 0x10069f0, and 0x100fcb0, [2024-11-19 09:42:44.464962] through [2024-11-19 09:42:44.465323])
00:24:57.861 [2024-11-19 09:42:44.465392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0490 is same with the state(6) to be set
00:24:57.862 (message repeated for tqpair=0xbf0490 through [2024-11-19 09:42:44.465726])
00:24:57.862 [2024-11-19 09:42:44.466338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0960 is same with the state(6) to be set
00:24:57.862 (message repeated for tqpair=0xbf0960 through [2024-11-19 09:42:44.466653])
00:24:57.862 [2024-11-19 09:42:44.467169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:57.862 [2024-11-19 09:42:44.467202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100fcb0 (9): Bad file descriptor
00:24:57.862 [2024-11-19 09:42:44.467522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set
00:24:57.863 (message repeated for tqpair=0xbf0e30 through [2024-11-19 09:42:44.467695])
00:24:57.863 [2024-11-19 09:42:44.468414] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:57.863 [2024-11-19 09:42:44.468824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.863 [2024-11-19 09:42:44.468842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100fcb0 with addr=10.0.0.2, port=4420
00:24:57.863 [2024-11-19 09:42:44.468850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100fcb0 is same with the state(6) to be set
00:24:57.863 [2024-11-19 09:42:44.468895] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type
0x00 00:24:57.863 [2024-11-19 09:42:44.468929] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:57.863 [2024-11-19 09:42:44.469391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100fcb0 (9): Bad file descriptor 00:24:57.863 [2024-11-19 09:42:44.469494] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:57.863 [2024-11-19 09:42:44.469623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:57.863 [2024-11-19 09:42:44.469635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:57.863 [2024-11-19 09:42:44.469645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:57.863 [2024-11-19 09:42:44.469654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:24:57.863 [2024-11-19 09:42:44.469697] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:57.863 [2024-11-19 09:42:44.469731] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:57.863 [2024-11-19 09:42:44.475142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100d810 (9): Bad file descriptor 00:24:57.863 [2024-11-19 09:42:44.475196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1461f20 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.475293] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100c420 (9): Bad file descriptor 00:24:57.863 [2024-11-19 09:42:44.475318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143b180 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.475407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.863 [2024-11-19 09:42:44.475469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.863 [2024-11-19 09:42:44.475476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf27610 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.475492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1483d00 (9): Bad file descriptor 00:24:57.863 [2024-11-19 09:42:44.475512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10069f0 (9): Bad file descriptor 00:24:57.863 [2024-11-19 09:42:44.477899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:57.863 [2024-11-19 09:42:44.478445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-11-19 09:42:44.478483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100fcb0 with addr=10.0.0.2, port=4420 00:24:57.863 
[2024-11-19 09:42:44.478495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100fcb0 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.478602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100fcb0 (9): Bad file descriptor 00:24:57.863 [2024-11-19 09:42:44.478676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:57.863 [2024-11-19 09:42:44.478685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:57.863 [2024-11-19 09:42:44.478693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:57.863 [2024-11-19 09:42:44.478702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:57.863 [2024-11-19 09:42:44.482908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.482918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.482925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.482932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.482939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.482945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.482952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.863 [2024-11-19 09:42:44.482958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.482964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.482969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.482974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.482980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.482988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.482994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.482999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 
is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 
00:24:57.864 [2024-11-19 09:42:44.483094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0e30 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.864 [2024-11-19 09:42:44.483597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483697] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf1320 is same with the state(6) to be set 00:24:57.864 [2024-11-19 09:42:44.483760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.864 [2024-11-19 09:42:44.483789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.864 [2024-11-19 09:42:44.483800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.483986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.483993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484003] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484096] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 
09:42:44.484304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.865 [2024-11-19 09:42:44.484497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.865 [2024-11-19 09:42:44.484505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.484514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.484522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.484531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.484538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.484548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.484556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.484565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.484575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.484583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1413fc0 is same with the state(6) to be set 00:24:57.866 [2024-11-19 09:42:44.486226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:24:57.866 [2024-11-19 
09:42:44.486248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:24:57.866 [2024-11-19 09:42:44.486289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004fa0 (9): Bad file descriptor 00:24:57.866 [2024-11-19 09:42:44.486303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1461f20 (9): Bad file descriptor 00:24:57.866 [2024-11-19 09:42:44.486324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.866 [2024-11-19 09:42:44.486333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.866 [2024-11-19 09:42:44.486349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.866 [2024-11-19 09:42:44.486365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.866 [2024-11-19 09:42:44.486381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1456310 is same with the state(6) to be set 
00:24:57.866 [2024-11-19 09:42:44.486427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143b180 (9): Bad file descriptor 00:24:57.866 [2024-11-19 09:42:44.486445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf27610 (9): Bad file descriptor 00:24:57.866 [2024-11-19 09:42:44.486580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486669] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 
09:42:44.486871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.486984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.486993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.487001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.487010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.487017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.487027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.487036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.487045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.866 [2024-11-19 09:42:44.487052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.866 [2024-11-19 09:42:44.487062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.867 [2024-11-19 09:42:44.487176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487275] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 
09:42:44.487593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487692] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.487739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.487747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1213aa0 is same with the state(6) to be set 00:24:57.867 [2024-11-19 09:42:44.489017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.489030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.489043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.867 [2024-11-19 09:42:44.489054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.867 [2024-11-19 09:42:44.489065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489191] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489288] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 
09:42:44.489495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.868 [2024-11-19 09:42:44.489789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.868 [2024-11-19 09:42:44.489797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.868 [2024-11-19 09:42:44.489806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489904] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.489982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.489989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.490000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.490007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.490017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.490024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.490034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.490041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.490050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.490059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.494597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.494630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.494642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.494651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.494661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.494669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.494680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.494688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.494697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.494706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.494716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.494724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.494733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.494742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.494751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1214d60 is same with the state(6) to be set 00:24:57.869 [2024-11-19 09:42:44.496089] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:57.869 [2024-11-19 09:42:44.496325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.869 [2024-11-19 09:42:44.496335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496430] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496528] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.870 [2024-11-19 09:42:44.496619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.870 [2024-11-19 09:42:44.496629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.496990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.496999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.497008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.497016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.497026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.497034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.870 [2024-11-19 09:42:44.497045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.870 [2024-11-19 09:42:44.497056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.497273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.497283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140ffa0 is same with the state(6) to be set
00:24:57.871 [2024-11-19 09:42:44.498862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.498879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.498892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.498900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.498910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.498919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.498929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.498937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.498948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.498956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.498966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.498975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.498984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.498993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.871 [2024-11-19 09:42:44.499359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.871 [2024-11-19 09:42:44.499367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.499983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.499994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.500003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.500014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.500022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.500032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.872 [2024-11-19 09:42:44.500040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.872 [2024-11-19 09:42:44.500049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250320 is same with the state(6) to be set
00:24:57.872 [2024-11-19 09:42:44.501363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:24:57.872 [2024-11-19 09:42:44.501382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:24:57.872 [2024-11-19 09:42:44.501394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:24:57.872 [2024-11-19 09:42:44.501404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:24:57.872 [2024-11-19 09:42:44.501670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-11-19 09:42:44.501688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1461f20 with addr=10.0.0.2, port=4420
00:24:57.873 [2024-11-19 09:42:44.501698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1461f20 is same with the state(6) to be set
00:24:57.873 [2024-11-19 09:42:44.501880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-11-19 09:42:44.501893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1004fa0 with addr=10.0.0.2, port=4420
00:24:57.873 [2024-11-19 09:42:44.501901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004fa0 is same with the state(6) to be set
00:24:57.873 [2024-11-19 09:42:44.501940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1456310 (9): Bad file descriptor
00:24:57.873 [2024-11-19 09:42:44.501983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004fa0 (9): Bad file descriptor
00:24:57.873 [2024-11-19 09:42:44.501997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1461f20 (9): Bad file descriptor
00:24:57.873 [2024-11-19 09:42:44.502358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-11-19 09:42:44.502374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10069f0 with addr=10.0.0.2, port=4420
00:24:57.873 [2024-11-19 09:42:44.502382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10069f0 is same with the state(6) to be set
00:24:57.873 [2024-11-19 09:42:44.502720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-11-19 09:42:44.502731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100d810 with addr=10.0.0.2, port=4420
00:24:57.873 [2024-11-19 09:42:44.502739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d810 is same with the state(6) to be set
00:24:57.873 [2024-11-19 09:42:44.503083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-11-19 09:42:44.503095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100c420 with addr=10.0.0.2, port=4420
00:24:57.873 [2024-11-19 09:42:44.503102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100c420 is same with the state(6) to be set
00:24:57.873 [2024-11-19 09:42:44.503376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-11-19 09:42:44.503388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1483d00 with addr=10.0.0.2, port=4420
00:24:57.873 [2024-11-19 09:42:44.503399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483d00 is same with the state(6) to be set
00:24:57.873 [2024-11-19 09:42:44.504220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.873 [2024-11-19 09:42:44.504233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.873 [2024-11-19 09:42:44.504245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.873 [2024-11-19 09:42:44.504253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.873 [2024-11-19 09:42:44.504263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.873 [2024-11-19 09:42:44.504271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.873 [2024-11-19 09:42:44.504281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.873 [2024-11-19 09:42:44.504288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.873 [2024-11-19 09:42:44.504298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.873 [2024-11-19 09:42:44.504305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.873 [2024-11-19 09:42:44.504316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.873 [2024-11-19 09:42:44.504323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.873 [2024-11-19 09:42:44.504334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.873 [2024-11-19 09:42:44.504342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.873 [2024-11-19 09:42:44.504352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1
lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.873 [2024-11-19 09:42:44.504464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.873 [2024-11-19 09:42:44.504671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.873 [2024-11-19 09:42:44.504681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 
09:42:44.504870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504974] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.504982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.504992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 
[2024-11-19 09:42:44.505188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.874 [2024-11-19 09:42:44.505391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.874 [2024-11-19 09:42:44.505400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14114d0 is same with the state(6) to be set 00:24:57.875 [2024-11-19 09:42:44.506663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.875 [2024-11-19 09:42:44.506770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506872] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.506990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.506998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 
09:42:44.507187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507287] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.875 [2024-11-19 09:42:44.507388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.875 [2024-11-19 09:42:44.507399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 
[2024-11-19 09:42:44.507499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.876 [2024-11-19 09:42:44.507855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.876 [2024-11-19 09:42:44.507864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1412a00 is same with the state(6) to be set 00:24:57.876 [2024-11-19 09:42:44.509674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:57.876 [2024-11-19 09:42:44.509701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:24:57.876 task offset: 31616 on job bdev=Nvme1n1 fails 00:24:57.876 00:24:57.876 Latency(us) 00:24:57.876 [2024-11-19T08:42:44.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.876 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.876 Job: Nvme1n1 ended in about 0.93 seconds with error 00:24:57.876 Verification LBA range: start 0x0 length 0x400 00:24:57.876 Nvme1n1 : 0.93 211.17 13.20 68.60 0.00 226084.87 7263.57 237677.23 00:24:57.876 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.876 Job: Nvme2n1 ended in about 0.96 seconds with 
error 00:24:57.876 Verification LBA range: start 0x0 length 0x400 00:24:57.876 Nvme2n1 : 0.96 133.99 8.37 67.00 0.00 308419.70 15837.87 270882.13 00:24:57.876 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.876 Job: Nvme3n1 ended in about 0.96 seconds with error 00:24:57.876 Verification LBA range: start 0x0 length 0x400 00:24:57.876 Nvme3n1 : 0.96 199.53 12.47 66.51 0.00 228212.48 17257.81 258648.75 00:24:57.876 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.876 Job: Nvme4n1 ended in about 0.96 seconds with error 00:24:57.876 Verification LBA range: start 0x0 length 0x400 00:24:57.876 Nvme4n1 : 0.96 199.01 12.44 66.34 0.00 223981.44 16165.55 225443.84 00:24:57.876 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.876 Job: Nvme5n1 ended in about 0.97 seconds with error 00:24:57.876 Verification LBA range: start 0x0 length 0x400 00:24:57.876 Nvme5n1 : 0.97 197.35 12.33 65.78 0.00 221187.63 22391.47 237677.23 00:24:57.876 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.876 Job: Nvme6n1 ended in about 0.98 seconds with error 00:24:57.876 Verification LBA range: start 0x0 length 0x400 00:24:57.876 Nvme6n1 : 0.98 143.54 8.97 65.62 0.00 272388.82 15182.51 286610.77 00:24:57.876 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.876 Job: Nvme7n1 ended in about 0.95 seconds with error 00:24:57.876 Verification LBA range: start 0x0 length 0x400 00:24:57.876 Nvme7n1 : 0.95 201.63 12.60 67.21 0.00 206407.89 19551.57 244667.73 00:24:57.876 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.876 Verification LBA range: start 0x0 length 0x400 00:24:57.876 Nvme8n1 : 0.95 269.57 16.85 0.00 0.00 200802.77 10321.92 248162.99 00:24:57.876 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.876 Verification LBA range: start 0x0 length 0x400 00:24:57.876 
Nvme9n1 : 0.94 204.02 12.75 0.00 0.00 258502.54 14636.37 244667.73 00:24:57.876 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.877 Job: Nvme10n1 ended in about 0.97 seconds with error 00:24:57.877 Verification LBA range: start 0x0 length 0x400 00:24:57.877 Nvme10n1 : 0.97 132.29 8.27 66.15 0.00 261050.31 21626.88 272629.76 00:24:57.877 [2024-11-19T08:42:44.625Z] =================================================================================================================== 00:24:57.877 [2024-11-19T08:42:44.625Z] Total : 1892.11 118.26 533.20 0.00 237043.43 7263.57 286610.77 00:24:57.877 [2024-11-19 09:42:44.535778] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:57.877 [2024-11-19 09:42:44.535830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:57.877 [2024-11-19 09:42:44.535892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10069f0 (9): Bad file descriptor 00:24:57.877 [2024-11-19 09:42:44.535907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100d810 (9): Bad file descriptor 00:24:57.877 [2024-11-19 09:42:44.535918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100c420 (9): Bad file descriptor 00:24:57.877 [2024-11-19 09:42:44.535929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1483d00 (9): Bad file descriptor 00:24:57.877 [2024-11-19 09:42:44.535938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:24:57.877 [2024-11-19 09:42:44.535946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:24:57.877 [2024-11-19 09:42:44.535955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:24:57.877 [2024-11-19 09:42:44.535965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:24:57.877 [2024-11-19 09:42:44.535973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:57.877 [2024-11-19 09:42:44.535980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:57.877 [2024-11-19 09:42:44.535988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:24:57.877 [2024-11-19 09:42:44.535995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:24:57.877 [2024-11-19 09:42:44.536511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-11-19 09:42:44.536534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100fcb0 with addr=10.0.0.2, port=4420 00:24:57.877 [2024-11-19 09:42:44.536544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100fcb0 is same with the state(6) to be set 00:24:57.877 [2024-11-19 09:42:44.536863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-11-19 09:42:44.536875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143b180 with addr=10.0.0.2, port=4420 00:24:57.877 [2024-11-19 09:42:44.536883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143b180 is same with the state(6) to be set 00:24:57.877 [2024-11-19 09:42:44.537218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-11-19 09:42:44.537230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf27610 with addr=10.0.0.2, port=4420 00:24:57.877 [2024-11-19 
09:42:44.537237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf27610 is same with the state(6) to be set 00:24:57.877 [2024-11-19 09:42:44.537245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:57.877 [2024-11-19 09:42:44.537257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:57.877 [2024-11-19 09:42:44.537264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:57.877 [2024-11-19 09:42:44.537272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:24:57.877 [2024-11-19 09:42:44.537280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:57.877 [2024-11-19 09:42:44.537287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:57.877 [2024-11-19 09:42:44.537294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:57.877 [2024-11-19 09:42:44.537300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:57.877 [2024-11-19 09:42:44.537308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:57.877 [2024-11-19 09:42:44.537315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:57.877 [2024-11-19 09:42:44.537322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:24:57.877 [2024-11-19 09:42:44.537329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:24:57.877 [2024-11-19 09:42:44.537337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:57.877 [2024-11-19 09:42:44.537344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:57.877 [2024-11-19 09:42:44.537351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:57.877 [2024-11-19 09:42:44.537358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:24:57.877 [2024-11-19 09:42:44.538039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100fcb0 (9): Bad file descriptor 00:24:57.877 [2024-11-19 09:42:44.538056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143b180 (9): Bad file descriptor 00:24:57.877 [2024-11-19 09:42:44.538066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf27610 (9): Bad file descriptor 00:24:57.877 [2024-11-19 09:42:44.538111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:24:57.877 [2024-11-19 09:42:44.538122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:24:57.877 [2024-11-19 09:42:44.538132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:24:57.877 [2024-11-19 09:42:44.538141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:57.877 [2024-11-19 09:42:44.538151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 
00:24:57.877 [2024-11-19 09:42:44.538166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:57.877 [2024-11-19 09:42:44.538176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:57.877 [2024-11-19 09:42:44.538227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:57.877 [2024-11-19 09:42:44.538235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:57.877 [2024-11-19 09:42:44.538242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:57.877 [2024-11-19 09:42:44.538249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:57.877 [2024-11-19 09:42:44.538261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:57.877 [2024-11-19 09:42:44.538267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:57.877 [2024-11-19 09:42:44.538275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:57.877 [2024-11-19 09:42:44.538282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:57.877 [2024-11-19 09:42:44.538289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:57.877 [2024-11-19 09:42:44.538296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:57.877 [2024-11-19 09:42:44.538304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:24:57.877 [2024-11-19 09:42:44.538310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:24:57.877 [2024-11-19 09:42:44.538661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-11-19 09:42:44.538675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1456310 with addr=10.0.0.2, port=4420 00:24:57.877 [2024-11-19 09:42:44.538683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1456310 is same with the state(6) to be set 00:24:57.877 [2024-11-19 09:42:44.539000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-11-19 09:42:44.539012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1004fa0 with addr=10.0.0.2, port=4420 00:24:57.877 [2024-11-19 09:42:44.539019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004fa0 is same with the state(6) to be set 00:24:57.877 [2024-11-19 09:42:44.539277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-11-19 09:42:44.539289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1461f20 with addr=10.0.0.2, port=4420 00:24:57.877 [2024-11-19 09:42:44.539297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1461f20 is same with the state(6) to be set 00:24:57.877 [2024-11-19 09:42:44.539632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-11-19 09:42:44.539643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1483d00 with addr=10.0.0.2, port=4420 00:24:57.877 [2024-11-19 09:42:44.539650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483d00 is same with the state(6) to be set 00:24:57.877 [2024-11-19 
09:42:44.539725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-11-19 09:42:44.539735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100c420 with addr=10.0.0.2, port=4420 00:24:57.878 [2024-11-19 09:42:44.539743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100c420 is same with the state(6) to be set 00:24:57.878 [2024-11-19 09:42:44.540002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-11-19 09:42:44.540012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100d810 with addr=10.0.0.2, port=4420 00:24:57.878 [2024-11-19 09:42:44.540019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d810 is same with the state(6) to be set 00:24:57.878 [2024-11-19 09:42:44.540305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-11-19 09:42:44.540317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10069f0 with addr=10.0.0.2, port=4420 00:24:57.878 [2024-11-19 09:42:44.540324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10069f0 is same with the state(6) to be set 00:24:57.878 [2024-11-19 09:42:44.540360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1456310 (9): Bad file descriptor 00:24:57.878 [2024-11-19 09:42:44.540372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1004fa0 (9): Bad file descriptor 00:24:57.878 [2024-11-19 09:42:44.540382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1461f20 (9): Bad file descriptor 00:24:57.878 [2024-11-19 09:42:44.540392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1483d00 (9): Bad file descriptor 00:24:57.878 [2024-11-19 
09:42:44.540402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100c420 (9): Bad file descriptor
00:24:57.878 [2024-11-19 09:42:44.540412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100d810 (9): Bad file descriptor
00:24:57.878 [2024-11-19 09:42:44.540422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10069f0 (9): Bad file descriptor
00:24:57.878 [2024-11-19 09:42:44.540455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:24:57.878 [2024-11-19 09:42:44.540464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:24:57.878 [2024-11-19 09:42:44.540472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:24:57.878 [2024-11-19 09:42:44.540479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:24:57.878 [2024-11-19 09:42:44.540487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:24:57.878 [2024-11-19 09:42:44.540494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:24:57.878 [2024-11-19 09:42:44.540501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:24:57.878 [2024-11-19 09:42:44.540508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:24:57.878 [2024-11-19 09:42:44.540515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:24:57.878 [2024-11-19 09:42:44.540522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:24:57.878 [2024-11-19 09:42:44.540529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:24:57.878 [2024-11-19 09:42:44.540536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:24:57.878 [2024-11-19 09:42:44.540546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:24:57.878 [2024-11-19 09:42:44.540553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:24:57.878 [2024-11-19 09:42:44.540560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:24:57.878 [2024-11-19 09:42:44.540567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:24:57.878 [2024-11-19 09:42:44.540574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:24:57.878 [2024-11-19 09:42:44.540581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:24:57.878 [2024-11-19 09:42:44.540588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:24:57.878 [2024-11-19 09:42:44.540595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:24:57.878 [2024-11-19 09:42:44.540602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:24:57.878 [2024-11-19 09:42:44.540609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:24:57.878 [2024-11-19 09:42:44.540619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:24:57.878 [2024-11-19 09:42:44.540626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:24:57.878 [2024-11-19 09:42:44.540634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:24:57.878 [2024-11-19 09:42:44.540641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:24:57.878 [2024-11-19 09:42:44.540648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:24:57.878 [2024-11-19 09:42:44.540654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:24:58.140 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 405402
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 405402
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 405402
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:59.083 rmmod nvme_tcp
00:24:59.083 rmmod nvme_fabrics
00:24:59.083 rmmod nvme_keyring
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 405072 ']'
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 405072
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 405072 ']'
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 405072
00:24:59.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (405072) - No such process
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 405072 is not found'
00:24:59.083 Process with pid 405072 is not found
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:59.083 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:24:59.084 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:59.084 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:59.084 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:59.084 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:59.084 09:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:01.635
00:25:01.635 real 0m7.897s
00:25:01.635 user 0m19.569s
00:25:01.635 sys 0m1.300s
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:25:01.635 ************************************
00:25:01.635 END TEST nvmf_shutdown_tc3
00:25:01.635 ************************************
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:01.635 ************************************
00:25:01.635 START TEST nvmf_shutdown_tc4
00:25:01.635 ************************************
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:25:01.635 09:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:01.635 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:01.635 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:25:01.635 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:01.635 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:01.635 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:01.635 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:01.635 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:01.635 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:25:01.635 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:25:01.636 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:25:01.636 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:25:01.636 Found net devices under 0000:4b:00.0: cvl_0_0
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:25:01.636 Found net devices under 0000:4b:00.1: cvl_0_1
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:01.636 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:01.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:01.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms
00:25:01.637
00:25:01.637 --- 10.0.0.2 ping statistics ---
00:25:01.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:01.637 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:01.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:01.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms
00:25:01.637
00:25:01.637 --- 10.0.0.1 ping statistics ---
00:25:01.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:01.637 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=406874
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 406874
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 406874 ']'
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:01.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:01.637 09:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:01.898 [2024-11-19 09:42:48.426400] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:25:01.898 [2024-11-19 09:42:48.426466] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:01.898 [2024-11-19 09:42:48.521434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:01.898 [2024-11-19 09:42:48.555403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:01.898 [2024-11-19 09:42:48.555435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:01.898 [2024-11-19 09:42:48.555441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:01.898 [2024-11-19 09:42:48.555446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:01.898 [2024-11-19 09:42:48.555451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:01.898 [2024-11-19 09:42:48.557051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:01.898 [2024-11-19 09:42:48.557219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:01.898 [2024-11-19 09:42:48.557534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:01.898 [2024-11-19 09:42:48.557535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:25:02.480 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:02.480 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:25:02.480 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:02.480 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:02.480 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:02.740 [2024-11-19 09:42:49.267978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.740 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:02.740 Malloc1
00:25:02.740 [2024-11-19 09:42:49.374312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:02.740 Malloc2
00:25:02.740 Malloc3
00:25:02.740 Malloc4
00:25:03.001 Malloc5
00:25:03.001 Malloc6
00:25:03.001 Malloc7
00:25:03.001 Malloc8
00:25:03.001 Malloc9
00:25:03.001 Malloc10
00:25:03.001 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.001 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:25:03.001 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:03.001 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:03.263 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=407258
00:25:03.263 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:25:03.263 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:25:03.263 [2024-11-19 09:42:49.856973] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 406874
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 406874 ']'
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 406874
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406874
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406874'
killing process with pid 406874
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 406874
00:25:08.559 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 406874
00:25:08.559 [2024-11-19 09:42:54.854308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.854356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.854362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.854367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.854744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cff0 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.854769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cff0 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.855064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127c180 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.855088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127c180 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.855095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127c180 is same with the state(6) to be set
00:25:08.559 Write completed with error (sct=0, sc=8)
00:25:08.559 starting I/O failed: -6
00:25:08.559 [2024-11-19 09:42:54.860040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:08.559 Write completed with error (sct=0, sc=8)
00:25:08.559 starting I/O failed: -6
00:25:08.559 [2024-11-19 09:42:54.860555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f1a0 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.860574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f1a0 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.860581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f1a0 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.860588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f1a0 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.860593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f1a0 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.860599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f1a0 is same with the state(6) to be set
00:25:08.559 [2024-11-19 09:42:54.860604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f1a0 is same with the state(6) to be set
00:25:08.559 Write completed with error (sct=0, sc=8)
00:25:08.559 starting I/O failed: -6
00:25:08.560 [2024-11-19 09:42:54.860898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.560 [2024-11-19 09:42:54.861035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127e800 is same with the state(6) to be set
00:25:08.560 [2024-11-19 09:42:54.861056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127e800 is same with the state(6) to be set
00:25:08.560 [2024-11-19 09:42:54.861062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127e800 is same with the state(6) to be set
00:25:08.560 [2024-11-19 09:42:54.861069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127e800 is same with the state(6) to be set
00:25:08.560 Write completed with error (sct=0, sc=8)
00:25:08.560 starting I/O failed: -6
00:25:08.560 [2024-11-19 09:42:54.861814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:08.560 Write completed with error (sct=0, sc=8)
00:25:08.560 starting I/O failed: -6
00:25:08.560 [2024-11-19 09:42:54.862175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280010 is same with the state(6) to be set
00:25:08.560 [2024-11-19 09:42:54.862200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280010 is same with the state(6) to be set
00:25:08.560 [2024-11-19 09:42:54.862207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280010 is same with the state(6) to be set
00:25:08.560 [2024-11-19 09:42:54.862213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280010 is same with the state(6) to be set
00:25:08.560 Write completed with error (sct=0, sc=8)
00:25:08.560 starting I/O failed: -6
00:25:08.560 [2024-11-19 09:42:54.862634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12804e0 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.862654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12804e0 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.862660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12804e0 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.862665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12804e0 is same with the state(6) to be set
00:25:08.561 Write completed with error (sct=0, sc=8)
00:25:08.561 starting I/O failed: -6
00:25:08.561 [2024-11-19 09:42:54.862860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12809b0 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.862881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12809b0 is same with the state(6) to be set
00:25:08.561 Write completed with error (sct=0, sc=8)
00:25:08.561 starting I/O failed: -6
00:25:08.561 [2024-11-19 09:42:54.863254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fb40 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.863272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fb40 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.863277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fb40 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.863282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fb40 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.863287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fb40 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.863292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fb40 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.863297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fb40 is same with the state(6) to be set
00:25:08.561 [2024-11-19 09:42:54.863377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.561 NVMe io qpair process completion error
00:25:08.561 Write completed with error (sct=0, sc=8)
00:25:08.561 starting I/O failed: -6
00:25:08.561 [2024-11-19 09:42:54.864367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:08.561 Write completed with error (sct=0, sc=8)
00:25:08.561 starting I/O failed: -6
00:25:08.561 [2024-11-19 09:42:54.865153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:08.561 Write completed with error (sct=0, sc=8)
00:25:08.562 starting I/O failed: -6
00:25:08.562 Write completed with error (sct=0, sc=8)
00:25:08.562 [2024-11-19 09:42:54.866054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.562 starting I/O failed: -6
starting I/O failed: -6 00:25:08.562 starting I/O failed: -6 00:25:08.562 starting I/O failed: -6 00:25:08.562 starting I/O failed: -6 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 
00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: 
-6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 [2024-11-19 09:42:54.867874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:08.562 NVMe io qpair process completion error 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 
00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 starting I/O failed: -6 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.562 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 [2024-11-19 
09:42:54.869085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed 
with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 [2024-11-19 09:42:54.869892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 
starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 
Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 [2024-11-19 09:42:54.870808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, 
sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error (sct=0, sc=8) 00:25:08.563 starting I/O failed: -6 00:25:08.563 Write completed with error 
(sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with 
error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 [2024-11-19 09:42:54.873809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:08.564 NVMe io qpair process completion error 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write 
completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 [2024-11-19 09:42:54.875021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.564 starting I/O failed: -6 
00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 starting I/O failed: -6 00:25:08.564 Write completed with error (sct=0, sc=8) 00:25:08.564 Write completed with error (sct=0, sc=8) 
00:25:08.564 starting I/O failed: -6
00:25:08.564 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.565 [2024-11-19 09:42:54.875842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.565 [2024-11-19 09:42:54.876785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.565 [2024-11-19 09:42:54.879030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.565 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.566 [2024-11-19 09:42:54.880065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.566 [2024-11-19 09:42:54.880878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.566 [2024-11-19 09:42:54.881804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "starting I/O failed: -6" and "Write completed with error (sct=0, sc=8)" records omitted ...]
00:25:08.567 [2024-11-19 09:42:54.883818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:08.567 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.567 [2024-11-19 09:42:54.885233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.567 [2024-11-19 09:42:54.886059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.568 [2024-11-19 09:42:54.886990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "starting I/O failed: -6" and "Write completed with error (sct=0, sc=8)" records omitted ...]
00:25:08.568 [2024-11-19 09:42:54.890256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:08.568 NVMe io qpair process completion error
00:25:08.568 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:25:08.568 starting I/O
failed: -6 00:25:08.568 Write completed with error (sct=0, sc=8) 00:25:08.568 Write completed with error (sct=0, sc=8) 00:25:08.568 Write completed with error (sct=0, sc=8) 00:25:08.568 Write completed with error (sct=0, sc=8) 00:25:08.568 starting I/O failed: -6 00:25:08.568 Write completed with error (sct=0, sc=8) 00:25:08.568 Write completed with error (sct=0, sc=8) 00:25:08.568 Write completed with error (sct=0, sc=8) 00:25:08.568 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 
00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 [2024-11-19 09:42:54.891374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.569 starting I/O failed: -6 00:25:08.569 starting I/O failed: -6 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 
00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 [2024-11-19 09:42:54.892349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 
starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 
Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 [2024-11-19 09:42:54.893261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.569 Write completed with error (sct=0, sc=8) 00:25:08.569 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, 
sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error 
(sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with 
error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 [2024-11-19 09:42:54.894703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:08.570 NVMe io qpair process completion error 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write 
completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 [2024-11-19 09:42:54.895932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on 
qpair id 2 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 
Write completed with error (sct=0, sc=8) 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.570 starting I/O failed: -6 00:25:08.570 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 [2024-11-19 09:42:54.896865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with 
error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 
starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 [2024-11-19 09:42:54.897765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 00:25:08.571 Write completed with error (sct=0, sc=8) 00:25:08.571 starting I/O failed: -6 
00:25:08.571 Write completed with error (sct=0, sc=8)
00:25:08.571 starting I/O failed: -6
(... above two records repeated verbatim for each outstanding write ...)
00:25:08.572 [2024-11-19 09:42:54.901562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.572 NVMe io qpair process completion error
(... repeated write-completion records omitted ...)
00:25:08.572 [2024-11-19 09:42:54.902614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
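The `(sct=0, sc=8)` pairs above are the NVMe completion status fields: status code type 0 (generic command status) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" -- consistent with the queue pairs being torn down after the CQ transport error. A minimal decoder sketch (hypothetical helper, table covers only a few generic codes; not part of SPDK):

```python
# Hypothetical decoder for the NVMe status fields seen in this log.
# Codes follow the NVMe base spec's generic command status table (sct=0).
STATUS = {
    (0, 0x00): "Successful Completion",
    (0, 0x07): "Command Abort Requested",
    (0, 0x08): "Command Aborted due to SQ Deletion",
}

def decode(sct: int, sc: int) -> str:
    """Map an (sct, sc) pair to its spec-defined name, if known."""
    return STATUS.get((sct, sc), f"unknown (sct={sct}, sc={sc})")
```

Running `decode(0, 8)` on the values in this log yields "Command Aborted due to SQ Deletion".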
00:25:08.572 starting I/O failed: -6
00:25:08.572 Write completed with error (sct=0, sc=8)
(... repeated write-completion records omitted ...)
00:25:08.572 [2024-11-19 09:42:54.903425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
(... repeated write-completion records omitted ...)
00:25:08.572 [2024-11-19 09:42:54.904353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
(... repeated write-completion records omitted ...)
Write completed with error (sct=0, sc=8)
00:25:08.573 starting I/O failed: -6
(... above two records repeated verbatim for each outstanding write ...)
00:25:08.573 [2024-11-19 09:42:54.905998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.573 NVMe io qpair process completion error
(... repeated write-completion records omitted ...)
00:25:08.573 [2024-11-19 09:42:54.907257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:08.573 starting I/O failed: -6
(... repeated write-completion records omitted ...)
00:25:08.574 Write completed with error (sct=0, sc=8)
00:25:08.574 starting I/O failed: -6
(... above two records repeated verbatim for each outstanding write ...)
00:25:08.574 [2024-11-19 09:42:54.908076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
(... repeated write-completion records omitted ...)
00:25:08.574 [2024-11-19 09:42:54.908997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
(... repeated write-completion records omitted ...)
00:25:08.575 [2024-11-19 09:42:54.912127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:08.575 NVMe io qpair process completion error
00:25:08.575 Initializing NVMe Controllers
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:25:08.575 Controller IO queue size 128, less than required.
00:25:08.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:25:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:25:08.575 Initialization complete. Launching workers.
00:25:08.575 ========================================================
00:25:08.575 Latency(us)
00:25:08.575 Device Information : IOPS MiB/s Average min max
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1913.13 82.20 66920.20 899.28 116920.82
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1890.41 81.23 67744.57 832.25 149694.25
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1899.54 81.62 67441.07 843.71 150427.33
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1919.93 82.50 66766.91 697.80 120203.47
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1927.57 82.83 66531.01 810.71 121913.13
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1935.22 83.15 66299.17 646.27 116326.83
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1849.21 79.46 69425.20 696.64 127329.96
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1918.87 82.45 66925.61 930.07 129494.25
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1940.53 83.38 66227.78 828.98 119434.49
00:25:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1917.59 82.40 67042.41 819.93 135139.67
00:25:08.575 ========================================================
00:25:08.575 Total : 19112.01 821.22 67120.85 646.27 150427.33
00:25:08.575
00:25:08.575 [2024-11-19 09:42:54.916363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aae0 is same with the state(6) to be set
00:25:08.575 [2024-11-19 09:42:54.916407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408560 is same with the state(6) to be set
00:25:08.575 [2024-11-19 09:42:54.916436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1408890 is same with the state(6) to be set
00:25:08.575 [2024-11-19 09:42:54.916465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1409740 is same with the state(6) to be set
00:25:08.575 [2024-11-19 09:42:54.916494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140a900 is same with the state(6) to be set
00:25:08.575 [2024-11-19 09:42:54.916524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408ef0 is same with the state(6) to be set
00:25:08.575 [2024-11-19 09:42:54.916553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140a720 is same with the state(6) to be set
00:25:08.575 [2024-11-19 09:42:54.916582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1409a70 is same with the state(6) to be set
00:25:08.575 [2024-11-19 09:42:54.916610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1408bc0 is same with the state(6) to be set
00:25:08.575 [2024-11-19 09:42:54.916638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1409410 is same with the state(6) to be set
00:25:08.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:25:08.575 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 407258
00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 407258
00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 407258 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.520 rmmod nvme_tcp 00:25:09.520 rmmod nvme_fabrics 00:25:09.520 rmmod nvme_keyring 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 406874 ']' 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 406874 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 406874 ']' 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 406874 00:25:09.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (406874) - No such process 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 406874 is not found' 00:25:09.520 Process with pid 406874 is not found 00:25:09.520 
09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.520 09:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:12.079 00:25:12.079 real 0m10.287s 00:25:12.079 user 0m27.951s 00:25:12.079 sys 0m4.008s 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.079 09:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 ************************************ 00:25:12.079 END TEST nvmf_shutdown_tc4 00:25:12.079 ************************************ 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:25:12.079 00:25:12.079 real 0m43.646s 00:25:12.079 user 1m46.518s 00:25:12.079 sys 0m13.784s 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 ************************************ 00:25:12.079 END TEST nvmf_shutdown 00:25:12.079 ************************************ 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:12.079 ************************************ 00:25:12.079 START TEST nvmf_nsid 00:25:12.079 ************************************ 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:12.079 * Looking for test storage... 
00:25:12.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:12.079 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.080 
09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.080 --rc genhtml_branch_coverage=1 00:25:12.080 --rc genhtml_function_coverage=1 00:25:12.080 --rc genhtml_legend=1 00:25:12.080 --rc geninfo_all_blocks=1 00:25:12.080 --rc 
geninfo_unexecuted_blocks=1 00:25:12.080 00:25:12.080 ' 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.080 --rc genhtml_branch_coverage=1 00:25:12.080 --rc genhtml_function_coverage=1 00:25:12.080 --rc genhtml_legend=1 00:25:12.080 --rc geninfo_all_blocks=1 00:25:12.080 --rc geninfo_unexecuted_blocks=1 00:25:12.080 00:25:12.080 ' 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.080 --rc genhtml_branch_coverage=1 00:25:12.080 --rc genhtml_function_coverage=1 00:25:12.080 --rc genhtml_legend=1 00:25:12.080 --rc geninfo_all_blocks=1 00:25:12.080 --rc geninfo_unexecuted_blocks=1 00:25:12.080 00:25:12.080 ' 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:12.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.080 --rc genhtml_branch_coverage=1 00:25:12.080 --rc genhtml_function_coverage=1 00:25:12.080 --rc genhtml_legend=1 00:25:12.080 --rc geninfo_all_blocks=1 00:25:12.080 --rc geninfo_unexecuted_blocks=1 00:25:12.080 00:25:12.080 ' 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.080 09:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.080 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:25:12.081 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:20.242 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:20.242 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:20.242 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:20.242 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:20.242 09:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:20.242 09:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.242 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.242 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.242 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:20.243 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:25:20.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:25:20.243 00:25:20.243 --- 10.0.0.2 ping statistics --- 00:25:20.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.243 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:25:20.243 00:25:20.243 --- 10.0.0.1 ping statistics --- 00:25:20.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.243 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:20.243 09:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=412607 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 412607 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 412607 ']' 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:20.243 [2024-11-19 09:43:06.163338] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
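[Editor's note] The nvmf_tcp_init phase traced above (common.sh@250-291) builds a point-to-point topology by moving one ice port into a network namespace and pinging across it. A minimal sketch of the equivalent wiring, using the interface and namespace names from this log (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk); this is illustrative only, not the common.sh implementation, and it defaults to a dry run because executing requires root and the actual NICs:

```shell
# Dry-run by default: print each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
# Move the target-side port into the namespace; the initiator-side port stays in the root ns.
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface, as ipts does in the trace.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

With DRY_RUN=0 (as root, with the devices present) this reproduces the `ip netns`/`ip link`/`ip addr` sequence visible at common.sh@271-287.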
00:25:20.243 [2024-11-19 09:43:06.163401] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.243 [2024-11-19 09:43:06.260942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.243 [2024-11-19 09:43:06.311830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.243 [2024-11-19 09:43:06.311879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.243 [2024-11-19 09:43:06.311889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.243 [2024-11-19 09:43:06.311896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.243 [2024-11-19 09:43:06.311903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.243 [2024-11-19 09:43:06.312672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.243 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=412702 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.505 
09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e2a990e1-3057-4521-bbf5-e47bceb3d0b7 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=8f48c5dc-f270-47f6-90e7-d73d6a6358d9 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=653a46dd-a18e-4207-a07b-1c04ce6d4eda 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:20.505 null0 00:25:20.505 null1 00:25:20.505 [2024-11-19 09:43:07.083138] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
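[Editor's note] The ns1uuid/ns2uuid/ns3uuid values generated above are later compared against the NGUIDs reported by `nvme id-ns`. The uuid2nguid helper (common.sh@787) does this with a plain `tr -d -`; a minimal re-implementation, also uppercasing so it matches the uppercased NGUID echoed at nsid.sh@43 (the uppercasing step is an assumption about the comparison, inferred from the log output):

```shell
# Map a UUID to the NGUID form used in the [[ ... == ... ]] checks:
# strip dashes, uppercase hex digits.
uuid2nguid() { echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'; }

uuid2nguid e2a990e1-3057-4521-bbf5-e47bceb3d0b7
```

For the first namespace UUID in this run, the result is E2A990E130574521BBF5E47BCEB3D0B7, matching the value the trace reads back from /dev/nvme0n1.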
00:25:20.505 [2024-11-19 09:43:07.083223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412702 ] 00:25:20.505 null2 00:25:20.505 [2024-11-19 09:43:07.090285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.505 [2024-11-19 09:43:07.114539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 412702 /var/tmp/tgt2.sock 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 412702 ']' 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:25:20.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.505 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:20.505 [2024-11-19 09:43:07.173391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.505 [2024-11-19 09:43:07.226730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.767 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.767 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:20.767 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:25:21.339 [2024-11-19 09:43:07.788608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.339 [2024-11-19 09:43:07.804782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:25:21.339 nvme0n1 nvme0n2 00:25:21.339 nvme1n1 00:25:21.339 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:25:21.339 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:25:21.339 09:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:25:22.726 09:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e2a990e1-3057-4521-bbf5-e47bceb3d0b7 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:25:23.669 09:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e2a990e130574521bbf5e47bceb3d0b7 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E2A990E130574521BBF5E47BCEB3D0B7 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E2A990E130574521BBF5E47BCEB3D0B7 == \E\2\A\9\9\0\E\1\3\0\5\7\4\5\2\1\B\B\F\5\E\4\7\B\C\E\B\3\D\0\B\7 ]] 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:25:23.669 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 8f48c5dc-f270-47f6-90e7-d73d6a6358d9 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:25:23.931 
09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8f48c5dcf27047f690e7d73d6a6358d9 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8F48C5DCF27047F690E7D73D6A6358D9 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 8F48C5DCF27047F690E7D73D6A6358D9 == \8\F\4\8\C\5\D\C\F\2\7\0\4\7\F\6\9\0\E\7\D\7\3\D\6\A\6\3\5\8\D\9 ]] 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 653a46dd-a18e-4207-a07b-1c04ce6d4eda 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=653a46dda18e4207a07b1c04ce6d4eda 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 653A46DDA18E4207A07B1C04CE6D4EDA 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 653A46DDA18E4207A07B1C04CE6D4EDA == \6\5\3\A\4\6\D\D\A\1\8\E\4\2\0\7\A\0\7\B\1\C\0\4\C\E\6\D\4\E\D\A ]] 00:25:23.931 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 412702 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 412702 ']' 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 412702 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 412702 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 412702' 00:25:24.192 killing process with pid 412702 00:25:24.192 09:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 412702 00:25:24.192 09:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 412702 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:24.453 rmmod nvme_tcp 00:25:24.453 rmmod nvme_fabrics 00:25:24.453 rmmod nvme_keyring 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 412607 ']' 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 412607 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 412607 ']' 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 412607 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.453 09:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 412607 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 412607' 00:25:24.453 killing process with pid 412607 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 412607 00:25:24.453 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 412607 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.714 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.714 09:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.628 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:26.628 00:25:26.628 real 0m14.925s 00:25:26.628 user 0m11.398s 00:25:26.628 sys 0m6.889s 00:25:26.628 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.628 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:26.628 ************************************ 00:25:26.628 END TEST nvmf_nsid 00:25:26.628 ************************************ 00:25:26.888 09:43:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:25:26.888 00:25:26.888 real 13m6.805s 00:25:26.888 user 27m28.670s 00:25:26.888 sys 3m52.865s 00:25:26.888 09:43:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.888 09:43:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:26.888 ************************************ 00:25:26.888 END TEST nvmf_target_extra 00:25:26.888 ************************************ 00:25:26.888 09:43:13 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:26.888 09:43:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:26.888 09:43:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.888 09:43:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:26.888 ************************************ 00:25:26.888 START TEST nvmf_host 00:25:26.888 ************************************ 00:25:26.888 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:26.888 * Looking for test storage... 
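[Editor's note] The waitforblk calls traced earlier (autotest_common.sh@1239-1250) poll `lsblk -l -o NAME | grep -q -w nvme0n1` with one-second sleeps until the connected namespace's block device appears. A simplified, generalized sketch of that retry loop; the `waitfor` name and the generic-predicate form are this sketch's own, not the autotest_common.sh API:

```shell
# Retry an arbitrary predicate command up to 15 times, sleeping 1s between
# attempts; returns 0 on success, 1 on timeout. The trace's concrete
# predicate is: lsblk -l -o NAME | grep -q -w nvme0n1
waitfor() {
  local i=0
  while ! "$@"; do
    i=$((i + 1))
    [ "$i" -ge 15 ] && return 1
    sleep 1
  done
  return 0
}
```

Usage in the spirit of the trace: `waitfor sh -c 'lsblk -l -o NAME | grep -q -w nvme0n1'`.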
00:25:26.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:26.888 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:26.888 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:26.888 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.150 --rc genhtml_branch_coverage=1 00:25:27.150 --rc genhtml_function_coverage=1 00:25:27.150 --rc genhtml_legend=1 00:25:27.150 --rc geninfo_all_blocks=1 00:25:27.150 --rc geninfo_unexecuted_blocks=1 00:25:27.150 00:25:27.150 ' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.150 --rc genhtml_branch_coverage=1 00:25:27.150 --rc genhtml_function_coverage=1 00:25:27.150 --rc genhtml_legend=1 00:25:27.150 --rc 
geninfo_all_blocks=1 00:25:27.150 --rc geninfo_unexecuted_blocks=1 00:25:27.150 00:25:27.150 ' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.150 --rc genhtml_branch_coverage=1 00:25:27.150 --rc genhtml_function_coverage=1 00:25:27.150 --rc genhtml_legend=1 00:25:27.150 --rc geninfo_all_blocks=1 00:25:27.150 --rc geninfo_unexecuted_blocks=1 00:25:27.150 00:25:27.150 ' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.150 --rc genhtml_branch_coverage=1 00:25:27.150 --rc genhtml_function_coverage=1 00:25:27.150 --rc genhtml_legend=1 00:25:27.150 --rc geninfo_all_blocks=1 00:25:27.150 --rc geninfo_unexecuted_blocks=1 00:25:27.150 00:25:27.150 ' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:27.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:27.150 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.151 09:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.151 ************************************ 00:25:27.151 START TEST nvmf_multicontroller 00:25:27.151 ************************************ 00:25:27.151 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:27.151 * Looking for test storage... 
00:25:27.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:27.151 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:27.151 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:25:27.151 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:27.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.412 --rc genhtml_branch_coverage=1 00:25:27.412 --rc genhtml_function_coverage=1 
00:25:27.412 --rc genhtml_legend=1 00:25:27.412 --rc geninfo_all_blocks=1 00:25:27.412 --rc geninfo_unexecuted_blocks=1 00:25:27.412 00:25:27.412 ' 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:27.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.412 --rc genhtml_branch_coverage=1 00:25:27.412 --rc genhtml_function_coverage=1 00:25:27.412 --rc genhtml_legend=1 00:25:27.412 --rc geninfo_all_blocks=1 00:25:27.412 --rc geninfo_unexecuted_blocks=1 00:25:27.412 00:25:27.412 ' 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:27.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.412 --rc genhtml_branch_coverage=1 00:25:27.412 --rc genhtml_function_coverage=1 00:25:27.412 --rc genhtml_legend=1 00:25:27.412 --rc geninfo_all_blocks=1 00:25:27.412 --rc geninfo_unexecuted_blocks=1 00:25:27.412 00:25:27.412 ' 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:27.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.412 --rc genhtml_branch_coverage=1 00:25:27.412 --rc genhtml_function_coverage=1 00:25:27.412 --rc genhtml_legend=1 00:25:27.412 --rc geninfo_all_blocks=1 00:25:27.412 --rc geninfo_unexecuted_blocks=1 00:25:27.412 00:25:27.412 ' 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.412 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.413 09:43:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:27.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:25:27.413 09:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:35.551 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:35.551 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.551 09:43:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:35.551 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:35.551 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:35.551 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:35.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.785 ms 00:25:35.552 00:25:35.552 --- 10.0.0.2 ping statistics --- 00:25:35.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.552 rtt min/avg/max/mdev = 0.785/0.785/0.785/0.000 ms 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:35.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:25:35.552 00:25:35.552 --- 10.0.0.1 ping statistics --- 00:25:35.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.552 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=417774 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 417774 00:25:35.552 09:43:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 417774 ']' 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.552 09:43:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.552 [2024-11-19 09:43:21.523356] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:35.552 [2024-11-19 09:43:21.523423] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.552 [2024-11-19 09:43:21.624719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:35.552 [2024-11-19 09:43:21.677385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.552 [2024-11-19 09:43:21.677436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:35.552 [2024-11-19 09:43:21.677445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.552 [2024-11-19 09:43:21.677452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.552 [2024-11-19 09:43:21.677458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:35.552 [2024-11-19 09:43:21.679230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.552 [2024-11-19 09:43:21.679467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.552 [2024-11-19 09:43:21.679469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.813 [2024-11-19 09:43:22.407641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.813 Malloc0 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.813 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.813 [2024-11-19 
09:43:22.478515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.814 [2024-11-19 09:43:22.490431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.814 Malloc1 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.814 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=418091 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 418091 /var/tmp/bdevperf.sock 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 418091 ']' 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:36.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:36.075 09:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.014 NVMe0n1 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.014 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.274 1 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:37.274 09:43:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.274 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.274 request: 00:25:37.274 { 00:25:37.274 "name": "NVMe0", 00:25:37.274 "trtype": "tcp", 00:25:37.274 "traddr": "10.0.0.2", 00:25:37.274 "adrfam": "ipv4", 00:25:37.274 "trsvcid": "4420", 00:25:37.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.274 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:37.274 "hostaddr": "10.0.0.1", 00:25:37.274 "prchk_reftag": false, 00:25:37.274 "prchk_guard": false, 00:25:37.274 "hdgst": false, 00:25:37.274 "ddgst": false, 00:25:37.274 "allow_unrecognized_csi": false, 00:25:37.274 "method": "bdev_nvme_attach_controller", 00:25:37.274 "req_id": 1 00:25:37.274 } 00:25:37.274 Got JSON-RPC error response 00:25:37.275 response: 00:25:37.275 { 00:25:37.275 "code": -114, 00:25:37.275 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:37.275 } 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:37.275 09:43:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.275 request: 00:25:37.275 { 00:25:37.275 "name": "NVMe0", 00:25:37.275 "trtype": "tcp", 00:25:37.275 "traddr": "10.0.0.2", 00:25:37.275 "adrfam": "ipv4", 00:25:37.275 "trsvcid": "4420", 00:25:37.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:37.275 "hostaddr": "10.0.0.1", 00:25:37.275 "prchk_reftag": false, 00:25:37.275 "prchk_guard": false, 00:25:37.275 "hdgst": false, 00:25:37.275 "ddgst": false, 00:25:37.275 "allow_unrecognized_csi": false, 00:25:37.275 "method": "bdev_nvme_attach_controller", 00:25:37.275 "req_id": 1 00:25:37.275 } 00:25:37.275 Got JSON-RPC error response 00:25:37.275 response: 00:25:37.275 { 00:25:37.275 "code": -114, 00:25:37.275 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:37.275 } 00:25:37.275 09:43:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.275 request: 00:25:37.275 { 00:25:37.275 "name": "NVMe0", 00:25:37.275 "trtype": "tcp", 00:25:37.275 "traddr": "10.0.0.2", 00:25:37.275 "adrfam": "ipv4", 00:25:37.275 "trsvcid": "4420", 00:25:37.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.275 "hostaddr": "10.0.0.1", 00:25:37.275 "prchk_reftag": false, 00:25:37.275 "prchk_guard": false, 00:25:37.275 "hdgst": false, 00:25:37.275 "ddgst": false, 00:25:37.275 "multipath": "disable", 00:25:37.275 "allow_unrecognized_csi": false, 00:25:37.275 "method": "bdev_nvme_attach_controller", 00:25:37.275 "req_id": 1 00:25:37.275 } 00:25:37.275 Got JSON-RPC error response 00:25:37.275 response: 00:25:37.275 { 00:25:37.275 "code": -114, 00:25:37.275 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:37.275 } 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.275 request: 00:25:37.275 { 00:25:37.275 "name": "NVMe0", 00:25:37.275 "trtype": "tcp", 00:25:37.275 "traddr": "10.0.0.2", 00:25:37.275 "adrfam": "ipv4", 00:25:37.275 "trsvcid": "4420", 00:25:37.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.275 "hostaddr": "10.0.0.1", 00:25:37.275 "prchk_reftag": false, 00:25:37.275 "prchk_guard": false, 00:25:37.275 "hdgst": false, 00:25:37.275 "ddgst": false, 00:25:37.275 "multipath": "failover", 00:25:37.275 "allow_unrecognized_csi": false, 00:25:37.275 "method": "bdev_nvme_attach_controller", 00:25:37.275 "req_id": 1 00:25:37.275 } 00:25:37.275 Got JSON-RPC error response 00:25:37.275 response: 00:25:37.275 { 00:25:37.275 "code": -114, 00:25:37.275 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:37.275 } 00:25:37.275 09:43:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.275 09:43:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.534 NVMe0n1 00:25:37.534 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.534 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.534 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.534 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.534 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.534 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:37.534 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.534 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.794 00:25:37.794 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.794 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.794 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:37.794 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.794 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.794 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.794 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:37.794 09:43:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:38.734 { 00:25:38.734 "results": [ 00:25:38.734 { 00:25:38.734 "job": "NVMe0n1", 00:25:38.734 "core_mask": "0x1", 00:25:38.734 "workload": "write", 00:25:38.734 "status": "finished", 00:25:38.734 "queue_depth": 128, 00:25:38.734 "io_size": 4096, 00:25:38.734 "runtime": 1.00553, 00:25:38.734 "iops": 28809.682456018218, 00:25:38.734 "mibps": 112.53782209382116, 00:25:38.734 "io_failed": 0, 00:25:38.734 "io_timeout": 0, 00:25:38.734 "avg_latency_us": 4432.84765553983, 00:25:38.734 "min_latency_us": 2143.5733333333333, 00:25:38.734 "max_latency_us": 13161.813333333334 00:25:38.734 } 00:25:38.734 ], 00:25:38.734 "core_count": 1 00:25:38.734 } 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 418091 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 418091 ']' 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 418091 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.734 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 418091 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 418091' 00:25:38.995 killing process with pid 418091 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 418091 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 418091 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:25:38.995 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:38.995 [2024-11-19 09:43:22.622442] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:25:38.995 [2024-11-19 09:43:22.622518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418091 ] 00:25:38.995 [2024-11-19 09:43:22.715862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.995 [2024-11-19 09:43:22.767762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.995 [2024-11-19 09:43:24.286833] bdev.c:4686:bdev_name_add: *ERROR*: Bdev name 8278b76b-e8c7-490f-9320-3a7aadf286eb already exists 00:25:38.995 [2024-11-19 09:43:24.286865] bdev.c:7824:bdev_register: *ERROR*: Unable to add uuid:8278b76b-e8c7-490f-9320-3a7aadf286eb alias for bdev NVMe1n1 00:25:38.995 [2024-11-19 09:43:24.286874] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:38.995 Running I/O for 1 seconds... 00:25:38.995 28777.00 IOPS, 112.41 MiB/s 00:25:38.995 Latency(us) 00:25:38.995 [2024-11-19T08:43:25.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.995 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:38.995 NVMe0n1 : 1.01 28809.68 112.54 0.00 0.00 4432.85 2143.57 13161.81 00:25:38.995 [2024-11-19T08:43:25.743Z] =================================================================================================================== 00:25:38.995 [2024-11-19T08:43:25.743Z] Total : 28809.68 112.54 0.00 0.00 4432.85 2143.57 13161.81 00:25:38.995 Received shutdown signal, test time was about 1.000000 seconds 00:25:38.995 00:25:38.995 Latency(us) 00:25:38.995 [2024-11-19T08:43:25.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.995 [2024-11-19T08:43:25.743Z] =================================================================================================================== 00:25:38.995 [2024-11-19T08:43:25.743Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:25:38.995 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:38.995 rmmod nvme_tcp 00:25:38.995 rmmod nvme_fabrics 00:25:38.995 rmmod nvme_keyring 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 417774 ']' 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 417774 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 417774 ']' 00:25:38.995 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 417774 
00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 417774 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 417774' 00:25:39.257 killing process with pid 417774 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 417774 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 417774 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.257 09:43:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:41.800 00:25:41.800 real 0m14.281s 00:25:41.800 user 0m18.378s 00:25:41.800 sys 0m6.519s 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:41.800 ************************************ 00:25:41.800 END TEST nvmf_multicontroller 00:25:41.800 ************************************ 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.800 ************************************ 00:25:41.800 START TEST nvmf_aer 00:25:41.800 ************************************ 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:41.800 * Looking for test storage... 
00:25:41.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.800 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:41.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.801 --rc genhtml_branch_coverage=1 00:25:41.801 --rc genhtml_function_coverage=1 00:25:41.801 --rc genhtml_legend=1 00:25:41.801 --rc geninfo_all_blocks=1 00:25:41.801 --rc geninfo_unexecuted_blocks=1 00:25:41.801 00:25:41.801 ' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:41.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.801 --rc 
genhtml_branch_coverage=1 00:25:41.801 --rc genhtml_function_coverage=1 00:25:41.801 --rc genhtml_legend=1 00:25:41.801 --rc geninfo_all_blocks=1 00:25:41.801 --rc geninfo_unexecuted_blocks=1 00:25:41.801 00:25:41.801 ' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:41.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.801 --rc genhtml_branch_coverage=1 00:25:41.801 --rc genhtml_function_coverage=1 00:25:41.801 --rc genhtml_legend=1 00:25:41.801 --rc geninfo_all_blocks=1 00:25:41.801 --rc geninfo_unexecuted_blocks=1 00:25:41.801 00:25:41.801 ' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:41.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.801 --rc genhtml_branch_coverage=1 00:25:41.801 --rc genhtml_function_coverage=1 00:25:41.801 --rc genhtml_legend=1 00:25:41.801 --rc geninfo_all_blocks=1 00:25:41.801 --rc geninfo_unexecuted_blocks=1 00:25:41.801 00:25:41.801 ' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.801 09:43:28 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:41.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:41.801 09:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:49.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:49.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.948 09:43:35 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:49.948 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:49.948 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.948 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:49.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:25:49.949 00:25:49.949 --- 10.0.0.2 ping statistics --- 00:25:49.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.949 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:25:49.949 00:25:49.949 --- 10.0.0.1 ping statistics --- 00:25:49.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.949 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=422799 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 422799 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 422799 ']' 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.949 09:43:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.949 [2024-11-19 09:43:35.837416] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:49.949 [2024-11-19 09:43:35.837481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.949 [2024-11-19 09:43:35.938075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.949 [2024-11-19 09:43:35.992493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:49.949 [2024-11-19 09:43:35.992549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.949 [2024-11-19 09:43:35.992558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.949 [2024-11-19 09:43:35.992566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.949 [2024-11-19 09:43:35.992572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.949 [2024-11-19 09:43:35.994599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.949 [2024-11-19 09:43:35.994760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.949 [2024-11-19 09:43:35.994914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.949 [2024-11-19 09:43:35.994915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.949 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.949 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:25:49.949 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:49.949 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:49.949 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.210 [2024-11-19 09:43:36.719232] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.210 Malloc0 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.210 [2024-11-19 09:43:36.798655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.210 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.211 [ 00:25:50.211 { 00:25:50.211 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:50.211 "subtype": "Discovery", 00:25:50.211 "listen_addresses": [], 00:25:50.211 "allow_any_host": true, 00:25:50.211 "hosts": [] 00:25:50.211 }, 00:25:50.211 { 00:25:50.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.211 "subtype": "NVMe", 00:25:50.211 "listen_addresses": [ 00:25:50.211 { 00:25:50.211 "trtype": "TCP", 00:25:50.211 "adrfam": "IPv4", 00:25:50.211 "traddr": "10.0.0.2", 00:25:50.211 "trsvcid": "4420" 00:25:50.211 } 00:25:50.211 ], 00:25:50.211 "allow_any_host": true, 00:25:50.211 "hosts": [], 00:25:50.211 "serial_number": "SPDK00000000000001", 00:25:50.211 "model_number": "SPDK bdev Controller", 00:25:50.211 "max_namespaces": 2, 00:25:50.211 "min_cntlid": 1, 00:25:50.211 "max_cntlid": 65519, 00:25:50.211 "namespaces": [ 00:25:50.211 { 00:25:50.211 "nsid": 1, 00:25:50.211 "bdev_name": "Malloc0", 00:25:50.211 "name": "Malloc0", 00:25:50.211 "nguid": "CC08D5E4AF24461AAA790A65D5CE7D0F", 00:25:50.211 "uuid": "cc08d5e4-af24-461a-aa79-0a65d5ce7d0f" 00:25:50.211 } 00:25:50.211 ] 00:25:50.211 } 00:25:50.211 ] 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=423131 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:25:50.211 09:43:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.472 Malloc1 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.472 Asynchronous Event Request test 00:25:50.472 Attaching to 10.0.0.2 00:25:50.472 Attached to 10.0.0.2 00:25:50.472 Registering asynchronous event callbacks... 00:25:50.472 Starting namespace attribute notice tests for all controllers... 00:25:50.472 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:50.472 aer_cb - Changed Namespace 00:25:50.472 Cleaning up... 
00:25:50.472 [ 00:25:50.472 { 00:25:50.472 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:50.472 "subtype": "Discovery", 00:25:50.472 "listen_addresses": [], 00:25:50.472 "allow_any_host": true, 00:25:50.472 "hosts": [] 00:25:50.472 }, 00:25:50.472 { 00:25:50.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.472 "subtype": "NVMe", 00:25:50.472 "listen_addresses": [ 00:25:50.472 { 00:25:50.472 "trtype": "TCP", 00:25:50.472 "adrfam": "IPv4", 00:25:50.472 "traddr": "10.0.0.2", 00:25:50.472 "trsvcid": "4420" 00:25:50.472 } 00:25:50.472 ], 00:25:50.472 "allow_any_host": true, 00:25:50.472 "hosts": [], 00:25:50.472 "serial_number": "SPDK00000000000001", 00:25:50.472 "model_number": "SPDK bdev Controller", 00:25:50.472 "max_namespaces": 2, 00:25:50.472 "min_cntlid": 1, 00:25:50.472 "max_cntlid": 65519, 00:25:50.472 "namespaces": [ 00:25:50.472 { 00:25:50.472 "nsid": 1, 00:25:50.472 "bdev_name": "Malloc0", 00:25:50.472 "name": "Malloc0", 00:25:50.472 "nguid": "CC08D5E4AF24461AAA790A65D5CE7D0F", 00:25:50.472 "uuid": "cc08d5e4-af24-461a-aa79-0a65d5ce7d0f" 00:25:50.472 }, 00:25:50.472 { 00:25:50.472 "nsid": 2, 00:25:50.472 "bdev_name": "Malloc1", 00:25:50.472 "name": "Malloc1", 00:25:50.472 "nguid": "3E0F458DB4F84634A6DE20D661A92870", 00:25:50.472 "uuid": "3e0f458d-b4f8-4634-a6de-20d661a92870" 00:25:50.472 } 00:25:50.472 ] 00:25:50.472 } 00:25:50.472 ] 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 423131 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.472 09:43:37 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:50.472 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:50.472 rmmod nvme_tcp 00:25:50.472 rmmod nvme_fabrics 00:25:50.733 rmmod nvme_keyring 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
422799 ']' 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 422799 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 422799 ']' 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 422799 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 422799 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422799' 00:25:50.733 killing process with pid 422799 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 422799 00:25:50.733 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 422799 00:25:50.993 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:50.993 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.993 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.993 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:50.994 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:25:50.994 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.994 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.994 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.994 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:50.994 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.994 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.994 09:43:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.908 09:43:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.908 00:25:52.908 real 0m11.477s 00:25:52.908 user 0m8.212s 00:25:52.908 sys 0m6.106s 00:25:52.908 09:43:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.908 09:43:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:52.908 ************************************ 00:25:52.908 END TEST nvmf_aer 00:25:52.908 ************************************ 00:25:52.908 09:43:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:52.908 09:43:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:52.908 09:43:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:52.908 09:43:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.908 ************************************ 00:25:52.908 START TEST nvmf_async_init 00:25:52.908 ************************************ 00:25:52.908 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:53.170 * Looking for test storage... 
00:25:53.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.170 09:43:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.170 --rc genhtml_branch_coverage=1 00:25:53.170 --rc genhtml_function_coverage=1 00:25:53.170 --rc genhtml_legend=1 00:25:53.170 --rc geninfo_all_blocks=1 00:25:53.170 --rc geninfo_unexecuted_blocks=1 00:25:53.170 
00:25:53.170 ' 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.170 --rc genhtml_branch_coverage=1 00:25:53.170 --rc genhtml_function_coverage=1 00:25:53.170 --rc genhtml_legend=1 00:25:53.170 --rc geninfo_all_blocks=1 00:25:53.170 --rc geninfo_unexecuted_blocks=1 00:25:53.170 00:25:53.170 ' 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.170 --rc genhtml_branch_coverage=1 00:25:53.170 --rc genhtml_function_coverage=1 00:25:53.170 --rc genhtml_legend=1 00:25:53.170 --rc geninfo_all_blocks=1 00:25:53.170 --rc geninfo_unexecuted_blocks=1 00:25:53.170 00:25:53.170 ' 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.170 --rc genhtml_branch_coverage=1 00:25:53.170 --rc genhtml_function_coverage=1 00:25:53.170 --rc genhtml_legend=1 00:25:53.170 --rc geninfo_all_blocks=1 00:25:53.170 --rc geninfo_unexecuted_blocks=1 00:25:53.170 00:25:53.170 ' 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:25:53.170 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=42805719842548f4813fd1167f919053 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.171 09:43:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:01.310 09:43:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:01.310 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:01.310 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.310 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:01.311 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:01.311 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:01.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:26:01.311 00:26:01.311 --- 10.0.0.2 ping statistics --- 00:26:01.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.311 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:26:01.311 00:26:01.311 --- 10.0.0.1 ping statistics --- 00:26:01.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.311 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=427456 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 427456 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 427456 ']' 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.311 09:43:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.311 [2024-11-19 09:43:47.441987] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:26:01.311 [2024-11-19 09:43:47.442051] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.311 [2024-11-19 09:43:47.542340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.311 [2024-11-19 09:43:47.593967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.311 [2024-11-19 09:43:47.594016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.311 [2024-11-19 09:43:47.594025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.311 [2024-11-19 09:43:47.594033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.311 [2024-11-19 09:43:47.594040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:01.311 [2024-11-19 09:43:47.594784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.572 [2024-11-19 09:43:48.303702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:01.572 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.573 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.834 null0 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 42805719842548f4813fd1167f919053 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.834 [2024-11-19 09:43:48.364035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.834 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.096 nvme0n1 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.096 [ 00:26:02.096 { 00:26:02.096 "name": "nvme0n1", 00:26:02.096 "aliases": [ 00:26:02.096 "42805719-8425-48f4-813f-d1167f919053" 00:26:02.096 ], 00:26:02.096 "product_name": "NVMe disk", 00:26:02.096 "block_size": 512, 00:26:02.096 "num_blocks": 2097152, 00:26:02.096 "uuid": "42805719-8425-48f4-813f-d1167f919053", 00:26:02.096 "numa_id": 0, 00:26:02.096 "assigned_rate_limits": { 00:26:02.096 "rw_ios_per_sec": 0, 00:26:02.096 "rw_mbytes_per_sec": 0, 00:26:02.096 "r_mbytes_per_sec": 0, 00:26:02.096 "w_mbytes_per_sec": 0 00:26:02.096 }, 00:26:02.096 "claimed": false, 00:26:02.096 "zoned": false, 00:26:02.096 "supported_io_types": { 00:26:02.096 "read": true, 00:26:02.096 "write": true, 00:26:02.096 "unmap": false, 00:26:02.096 "flush": true, 00:26:02.096 "reset": true, 00:26:02.096 "nvme_admin": true, 00:26:02.096 "nvme_io": true, 00:26:02.096 "nvme_io_md": false, 00:26:02.096 "write_zeroes": true, 00:26:02.096 "zcopy": false, 00:26:02.096 "get_zone_info": false, 00:26:02.096 "zone_management": false, 00:26:02.096 "zone_append": false, 00:26:02.096 "compare": true, 00:26:02.096 "compare_and_write": true, 00:26:02.096 "abort": true, 00:26:02.096 "seek_hole": false, 00:26:02.096 "seek_data": false, 00:26:02.096 "copy": true, 00:26:02.096 
"nvme_iov_md": false 00:26:02.096 }, 00:26:02.096 "memory_domains": [ 00:26:02.096 { 00:26:02.096 "dma_device_id": "system", 00:26:02.096 "dma_device_type": 1 00:26:02.096 } 00:26:02.096 ], 00:26:02.096 "driver_specific": { 00:26:02.096 "nvme": [ 00:26:02.096 { 00:26:02.096 "trid": { 00:26:02.096 "trtype": "TCP", 00:26:02.096 "adrfam": "IPv4", 00:26:02.096 "traddr": "10.0.0.2", 00:26:02.096 "trsvcid": "4420", 00:26:02.096 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:02.096 }, 00:26:02.096 "ctrlr_data": { 00:26:02.096 "cntlid": 1, 00:26:02.096 "vendor_id": "0x8086", 00:26:02.096 "model_number": "SPDK bdev Controller", 00:26:02.096 "serial_number": "00000000000000000000", 00:26:02.096 "firmware_revision": "25.01", 00:26:02.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:02.096 "oacs": { 00:26:02.096 "security": 0, 00:26:02.096 "format": 0, 00:26:02.096 "firmware": 0, 00:26:02.096 "ns_manage": 0 00:26:02.096 }, 00:26:02.096 "multi_ctrlr": true, 00:26:02.096 "ana_reporting": false 00:26:02.096 }, 00:26:02.096 "vs": { 00:26:02.096 "nvme_version": "1.3" 00:26:02.096 }, 00:26:02.096 "ns_data": { 00:26:02.096 "id": 1, 00:26:02.096 "can_share": true 00:26:02.096 } 00:26:02.096 } 00:26:02.096 ], 00:26:02.096 "mp_policy": "active_passive" 00:26:02.096 } 00:26:02.096 } 00:26:02.096 ] 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.096 [2024-11-19 09:43:48.640575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:02.096 [2024-11-19 09:43:48.640661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x847ce0 (9): Bad file descriptor 00:26:02.096 [2024-11-19 09:43:48.772268] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.096 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.096 [ 00:26:02.096 { 00:26:02.096 "name": "nvme0n1", 00:26:02.096 "aliases": [ 00:26:02.096 "42805719-8425-48f4-813f-d1167f919053" 00:26:02.096 ], 00:26:02.096 "product_name": "NVMe disk", 00:26:02.096 "block_size": 512, 00:26:02.096 "num_blocks": 2097152, 00:26:02.096 "uuid": "42805719-8425-48f4-813f-d1167f919053", 00:26:02.096 "numa_id": 0, 00:26:02.096 "assigned_rate_limits": { 00:26:02.096 "rw_ios_per_sec": 0, 00:26:02.096 "rw_mbytes_per_sec": 0, 00:26:02.096 "r_mbytes_per_sec": 0, 00:26:02.096 "w_mbytes_per_sec": 0 00:26:02.096 }, 00:26:02.096 "claimed": false, 00:26:02.096 "zoned": false, 00:26:02.096 "supported_io_types": { 00:26:02.096 "read": true, 00:26:02.096 "write": true, 00:26:02.096 "unmap": false, 00:26:02.096 "flush": true, 00:26:02.096 "reset": true, 00:26:02.096 "nvme_admin": true, 00:26:02.096 "nvme_io": true, 00:26:02.096 "nvme_io_md": false, 00:26:02.096 "write_zeroes": true, 00:26:02.096 "zcopy": false, 00:26:02.096 "get_zone_info": false, 00:26:02.096 "zone_management": false, 00:26:02.096 "zone_append": false, 00:26:02.096 "compare": true, 00:26:02.096 "compare_and_write": true, 00:26:02.096 "abort": true, 00:26:02.096 "seek_hole": false, 00:26:02.096 "seek_data": false, 00:26:02.096 "copy": true, 00:26:02.096 "nvme_iov_md": false 00:26:02.096 }, 00:26:02.096 "memory_domains": [ 
00:26:02.096 { 00:26:02.096 "dma_device_id": "system", 00:26:02.096 "dma_device_type": 1 00:26:02.096 } 00:26:02.096 ], 00:26:02.096 "driver_specific": { 00:26:02.096 "nvme": [ 00:26:02.096 { 00:26:02.096 "trid": { 00:26:02.096 "trtype": "TCP", 00:26:02.096 "adrfam": "IPv4", 00:26:02.096 "traddr": "10.0.0.2", 00:26:02.096 "trsvcid": "4420", 00:26:02.096 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:02.096 }, 00:26:02.096 "ctrlr_data": { 00:26:02.096 "cntlid": 2, 00:26:02.096 "vendor_id": "0x8086", 00:26:02.096 "model_number": "SPDK bdev Controller", 00:26:02.096 "serial_number": "00000000000000000000", 00:26:02.096 "firmware_revision": "25.01", 00:26:02.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:02.096 "oacs": { 00:26:02.096 "security": 0, 00:26:02.096 "format": 0, 00:26:02.096 "firmware": 0, 00:26:02.096 "ns_manage": 0 00:26:02.096 }, 00:26:02.096 "multi_ctrlr": true, 00:26:02.096 "ana_reporting": false 00:26:02.096 }, 00:26:02.096 "vs": { 00:26:02.097 "nvme_version": "1.3" 00:26:02.097 }, 00:26:02.097 "ns_data": { 00:26:02.097 "id": 1, 00:26:02.097 "can_share": true 00:26:02.097 } 00:26:02.097 } 00:26:02.097 ], 00:26:02.097 "mp_policy": "active_passive" 00:26:02.097 } 00:26:02.097 } 00:26:02.097 ] 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.w9Eq5B5B0Z 
00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.w9Eq5B5B0Z 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.w9Eq5B5B0Z 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.097 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.358 [2024-11-19 09:43:48.861264] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:02.358 [2024-11-19 09:43:48.861433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.358 [2024-11-19 09:43:48.885340] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:02.358 nvme0n1 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.358 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.358 [ 00:26:02.358 { 00:26:02.358 "name": "nvme0n1", 00:26:02.358 "aliases": [ 00:26:02.358 "42805719-8425-48f4-813f-d1167f919053" 00:26:02.358 ], 00:26:02.358 "product_name": "NVMe disk", 00:26:02.358 "block_size": 512, 00:26:02.358 "num_blocks": 2097152, 00:26:02.358 "uuid": "42805719-8425-48f4-813f-d1167f919053", 00:26:02.358 "numa_id": 0, 00:26:02.358 "assigned_rate_limits": { 00:26:02.358 "rw_ios_per_sec": 0, 00:26:02.358 
"rw_mbytes_per_sec": 0, 00:26:02.358 "r_mbytes_per_sec": 0, 00:26:02.358 "w_mbytes_per_sec": 0 00:26:02.358 }, 00:26:02.358 "claimed": false, 00:26:02.358 "zoned": false, 00:26:02.358 "supported_io_types": { 00:26:02.358 "read": true, 00:26:02.358 "write": true, 00:26:02.358 "unmap": false, 00:26:02.358 "flush": true, 00:26:02.358 "reset": true, 00:26:02.358 "nvme_admin": true, 00:26:02.358 "nvme_io": true, 00:26:02.358 "nvme_io_md": false, 00:26:02.358 "write_zeroes": true, 00:26:02.358 "zcopy": false, 00:26:02.358 "get_zone_info": false, 00:26:02.358 "zone_management": false, 00:26:02.358 "zone_append": false, 00:26:02.358 "compare": true, 00:26:02.358 "compare_and_write": true, 00:26:02.358 "abort": true, 00:26:02.358 "seek_hole": false, 00:26:02.358 "seek_data": false, 00:26:02.358 "copy": true, 00:26:02.358 "nvme_iov_md": false 00:26:02.358 }, 00:26:02.358 "memory_domains": [ 00:26:02.358 { 00:26:02.358 "dma_device_id": "system", 00:26:02.358 "dma_device_type": 1 00:26:02.358 } 00:26:02.358 ], 00:26:02.358 "driver_specific": { 00:26:02.358 "nvme": [ 00:26:02.358 { 00:26:02.358 "trid": { 00:26:02.358 "trtype": "TCP", 00:26:02.358 "adrfam": "IPv4", 00:26:02.358 "traddr": "10.0.0.2", 00:26:02.358 "trsvcid": "4421", 00:26:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:02.358 }, 00:26:02.358 "ctrlr_data": { 00:26:02.358 "cntlid": 3, 00:26:02.358 "vendor_id": "0x8086", 00:26:02.358 "model_number": "SPDK bdev Controller", 00:26:02.358 "serial_number": "00000000000000000000", 00:26:02.358 "firmware_revision": "25.01", 00:26:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:02.358 "oacs": { 00:26:02.358 "security": 0, 00:26:02.358 "format": 0, 00:26:02.358 "firmware": 0, 00:26:02.358 "ns_manage": 0 00:26:02.358 }, 00:26:02.358 "multi_ctrlr": true, 00:26:02.358 "ana_reporting": false 00:26:02.358 }, 00:26:02.358 "vs": { 00:26:02.358 "nvme_version": "1.3" 00:26:02.359 }, 00:26:02.359 "ns_data": { 00:26:02.359 "id": 1, 00:26:02.359 "can_share": true 00:26:02.359 } 
00:26:02.359 } 00:26:02.359 ], 00:26:02.359 "mp_policy": "active_passive" 00:26:02.359 } 00:26:02.359 } 00:26:02.359 ] 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.w9Eq5B5B0Z 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.359 09:43:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.359 rmmod nvme_tcp 00:26:02.359 rmmod nvme_fabrics 00:26:02.359 rmmod nvme_keyring 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:02.359 09:43:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 427456 ']' 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 427456 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 427456 ']' 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 427456 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.359 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427456 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427456' 00:26:02.626 killing process with pid 427456 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 427456 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 427456 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.626 09:43:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.626 09:43:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.175 00:26:05.175 real 0m11.720s 00:26:05.175 user 0m4.204s 00:26:05.175 sys 0m6.086s 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:05.175 ************************************ 00:26:05.175 END TEST nvmf_async_init 00:26:05.175 ************************************ 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.175 ************************************ 00:26:05.175 START TEST dma 00:26:05.175 ************************************ 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:05.175 * 
Looking for test storage... 00:26:05.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.175 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:05.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.176 --rc genhtml_branch_coverage=1 00:26:05.176 --rc genhtml_function_coverage=1 00:26:05.176 --rc genhtml_legend=1 00:26:05.176 --rc geninfo_all_blocks=1 00:26:05.176 --rc geninfo_unexecuted_blocks=1 00:26:05.176 00:26:05.176 ' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:05.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.176 --rc genhtml_branch_coverage=1 00:26:05.176 --rc genhtml_function_coverage=1 
00:26:05.176 --rc genhtml_legend=1 00:26:05.176 --rc geninfo_all_blocks=1 00:26:05.176 --rc geninfo_unexecuted_blocks=1 00:26:05.176 00:26:05.176 ' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:05.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.176 --rc genhtml_branch_coverage=1 00:26:05.176 --rc genhtml_function_coverage=1 00:26:05.176 --rc genhtml_legend=1 00:26:05.176 --rc geninfo_all_blocks=1 00:26:05.176 --rc geninfo_unexecuted_blocks=1 00:26:05.176 00:26:05.176 ' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:05.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.176 --rc genhtml_branch_coverage=1 00:26:05.176 --rc genhtml_function_coverage=1 00:26:05.176 --rc genhtml_legend=1 00:26:05.176 --rc geninfo_all_blocks=1 00:26:05.176 --rc geninfo_unexecuted_blocks=1 00:26:05.176 00:26:05.176 ' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:05.176 
09:43:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:05.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:05.176 00:26:05.176 real 0m0.240s 00:26:05.176 user 0m0.142s 00:26:05.176 sys 0m0.112s 00:26:05.176 09:43:51 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:05.176 ************************************ 00:26:05.176 END TEST dma 00:26:05.176 ************************************ 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.176 ************************************ 00:26:05.176 START TEST nvmf_identify 00:26:05.176 ************************************ 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:05.176 * Looking for test storage... 
00:26:05.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:26:05.176 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:05.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.439 --rc genhtml_branch_coverage=1 00:26:05.439 --rc genhtml_function_coverage=1 00:26:05.439 --rc genhtml_legend=1 00:26:05.439 --rc geninfo_all_blocks=1 00:26:05.439 --rc geninfo_unexecuted_blocks=1 00:26:05.439 00:26:05.439 ' 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:26:05.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.439 --rc genhtml_branch_coverage=1 00:26:05.439 --rc genhtml_function_coverage=1 00:26:05.439 --rc genhtml_legend=1 00:26:05.439 --rc geninfo_all_blocks=1 00:26:05.439 --rc geninfo_unexecuted_blocks=1 00:26:05.439 00:26:05.439 ' 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:05.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.439 --rc genhtml_branch_coverage=1 00:26:05.439 --rc genhtml_function_coverage=1 00:26:05.439 --rc genhtml_legend=1 00:26:05.439 --rc geninfo_all_blocks=1 00:26:05.439 --rc geninfo_unexecuted_blocks=1 00:26:05.439 00:26:05.439 ' 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:05.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.439 --rc genhtml_branch_coverage=1 00:26:05.439 --rc genhtml_function_coverage=1 00:26:05.439 --rc genhtml_legend=1 00:26:05.439 --rc geninfo_all_blocks=1 00:26:05.439 --rc geninfo_unexecuted_blocks=1 00:26:05.439 00:26:05.439 ' 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.439 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.440 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.440 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.440 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.440 09:43:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:05.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:05.440 09:43:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:13.588 09:43:59 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:13.588 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.588 
09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:13.588 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.588 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:13.589 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:13.589 09:43:59 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:13.589 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:13.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:26:13.589 00:26:13.589 --- 10.0.0.2 ping statistics --- 00:26:13.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.589 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:26:13.589 00:26:13.589 --- 10.0.0.1 ping statistics --- 00:26:13.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.589 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=431994 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 431994 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 431994 ']' 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.589 09:43:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.589 [2024-11-19 09:43:59.546290] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:13.589 [2024-11-19 09:43:59.546357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.589 [2024-11-19 09:43:59.646481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.589 [2024-11-19 09:43:59.700980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.589 [2024-11-19 09:43:59.701037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.589 [2024-11-19 09:43:59.701047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.589 [2024-11-19 09:43:59.701055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.589 [2024-11-19 09:43:59.701062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:13.589 [2024-11-19 09:43:59.703382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.589 [2024-11-19 09:43:59.703539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.589 [2024-11-19 09:43:59.703699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.589 [2024-11-19 09:43:59.703700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.850 [2024-11-19 09:44:00.383408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.850 Malloc0 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.850 09:44:00 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.850 [2024-11-19 09:44:00.501524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.850 09:44:00 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:13.850 [ 00:26:13.850 { 00:26:13.850 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:13.850 "subtype": "Discovery", 00:26:13.850 "listen_addresses": [ 00:26:13.850 { 00:26:13.850 "trtype": "TCP", 00:26:13.850 "adrfam": "IPv4", 00:26:13.850 "traddr": "10.0.0.2", 00:26:13.850 "trsvcid": "4420" 00:26:13.850 } 00:26:13.850 ], 00:26:13.850 "allow_any_host": true, 00:26:13.850 "hosts": [] 00:26:13.850 }, 00:26:13.850 { 00:26:13.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.850 "subtype": "NVMe", 00:26:13.850 "listen_addresses": [ 00:26:13.850 { 00:26:13.850 "trtype": "TCP", 00:26:13.850 "adrfam": "IPv4", 00:26:13.850 "traddr": "10.0.0.2", 00:26:13.850 "trsvcid": "4420" 00:26:13.850 } 00:26:13.850 ], 00:26:13.850 "allow_any_host": true, 00:26:13.850 "hosts": [], 00:26:13.850 "serial_number": "SPDK00000000000001", 00:26:13.850 "model_number": "SPDK bdev Controller", 00:26:13.850 "max_namespaces": 32, 00:26:13.850 "min_cntlid": 1, 00:26:13.850 "max_cntlid": 65519, 00:26:13.850 "namespaces": [ 00:26:13.850 { 00:26:13.850 "nsid": 1, 00:26:13.850 "bdev_name": "Malloc0", 00:26:13.850 "name": "Malloc0", 00:26:13.850 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:13.850 "eui64": "ABCDEF0123456789", 00:26:13.850 "uuid": "4ad578ca-aceb-4758-968e-44dd0444b2e5" 00:26:13.850 } 00:26:13.850 ] 00:26:13.850 } 00:26:13.850 ] 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.850 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:13.850 [2024-11-19 09:44:00.567100] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:13.850 [2024-11-19 09:44:00.567190] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432225 ] 00:26:14.114 [2024-11-19 09:44:00.636767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:26:14.114 [2024-11-19 09:44:00.636843] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:14.114 [2024-11-19 09:44:00.636850] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:14.114 [2024-11-19 09:44:00.636866] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:14.114 [2024-11-19 09:44:00.636879] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:14.114 [2024-11-19 09:44:00.637796] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:26:14.114 [2024-11-19 09:44:00.637842] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x199d690 0 00:26:14.114 [2024-11-19 09:44:00.648180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:14.114 [2024-11-19 09:44:00.648196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:14.114 [2024-11-19 09:44:00.648202] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:14.114 [2024-11-19 09:44:00.648206] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:14.114 [2024-11-19 09:44:00.648250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.114 [2024-11-19 09:44:00.648257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.114 [2024-11-19 09:44:00.648262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690) 00:26:14.114 [2024-11-19 09:44:00.648279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:14.114 [2024-11-19 09:44:00.648306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0 00:26:14.114 [2024-11-19 09:44:00.656176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.114 [2024-11-19 09:44:00.656187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.114 [2024-11-19 09:44:00.656192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.114 [2024-11-19 09:44:00.656203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690 00:26:14.114 [2024-11-19 09:44:00.656218] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:14.114 [2024-11-19 09:44:00.656227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:26:14.114 [2024-11-19 09:44:00.656233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:26:14.114 [2024-11-19 09:44:00.656250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.114 [2024-11-19 09:44:00.656254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.114 [2024-11-19 09:44:00.656259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690) 
00:26:14.114 [2024-11-19 09:44:00.656267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.114 [2024-11-19 09:44:00.656284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0
00:26:14.114 [2024-11-19 09:44:00.656474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.114 [2024-11-19 09:44:00.656482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.114 [2024-11-19 09:44:00.656486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.114 [2024-11-19 09:44:00.656490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690
00:26:14.114 [2024-11-19 09:44:00.656496] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
00:26:14.114 [2024-11-19 09:44:00.656504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
00:26:14.114 [2024-11-19 09:44:00.656512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.114 [2024-11-19 09:44:00.656517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.114 [2024-11-19 09:44:00.656522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690)
00:26:14.114 [2024-11-19 09:44:00.656529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.114 [2024-11-19 09:44:00.656540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0
00:26:14.114 [2024-11-19 09:44:00.656725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.114 [2024-11-19 09:44:00.656732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.114 [2024-11-19 09:44:00.656736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.114 [2024-11-19 09:44:00.656741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690
00:26:14.114 [2024-11-19 09:44:00.656747] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
00:26:14.114 [2024-11-19 09:44:00.656756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
00:26:14.114 [2024-11-19 09:44:00.656763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.114 [2024-11-19 09:44:00.656767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.114 [2024-11-19 09:44:00.656771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690)
00:26:14.114 [2024-11-19 09:44:00.656778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.114 [2024-11-19 09:44:00.656790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0
00:26:14.114 [2024-11-19 09:44:00.657027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.114 [2024-11-19 09:44:00.657035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.114 [2024-11-19 09:44:00.657039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690
00:26:14.115 [2024-11-19 09:44:00.657053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:26:14.115 [2024-11-19 09:44:00.657063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.657079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.115 [2024-11-19 09:44:00.657089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0
00:26:14.115 [2024-11-19 09:44:00.657282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.115 [2024-11-19 09:44:00.657290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.115 [2024-11-19 09:44:00.657293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690
00:26:14.115 [2024-11-19 09:44:00.657303] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
00:26:14.115 [2024-11-19 09:44:00.657309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
00:26:14.115 [2024-11-19 09:44:00.657317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:26:14.115 [2024-11-19 09:44:00.657430] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
00:26:14.115 [2024-11-19 09:44:00.657436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:26:14.115 [2024-11-19 09:44:00.657445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.657461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.115 [2024-11-19 09:44:00.657474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0
00:26:14.115 [2024-11-19 09:44:00.657664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.115 [2024-11-19 09:44:00.657672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.115 [2024-11-19 09:44:00.657675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690
00:26:14.115 [2024-11-19 09:44:00.657684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:26:14.115 [2024-11-19 09:44:00.657698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.657713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.115 [2024-11-19 09:44:00.657724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0
00:26:14.115 [2024-11-19 09:44:00.657945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.115 [2024-11-19 09:44:00.657955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.115 [2024-11-19 09:44:00.657961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.657967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690
00:26:14.115 [2024-11-19 09:44:00.657972] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:26:14.115 [2024-11-19 09:44:00.657979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms)
00:26:14.115 [2024-11-19 09:44:00.657987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout)
00:26:14.115 [2024-11-19 09:44:00.657996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms)
00:26:14.115 [2024-11-19 09:44:00.658007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.658018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.115 [2024-11-19 09:44:00.658029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0
00:26:14.115 [2024-11-19 09:44:00.658345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:14.115 [2024-11-19 09:44:00.658352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:14.115 [2024-11-19 09:44:00.658356] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658361] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x199d690): datao=0, datal=4096, cccid=0
00:26:14.115 [2024-11-19 09:44:00.658366] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ff100) on tqpair(0x199d690): expected_datao=0, payload_size=4096
00:26:14.115 [2024-11-19 09:44:00.658371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658380] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658385] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.115 [2024-11-19 09:44:00.658559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.115 [2024-11-19 09:44:00.658563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690
00:26:14.115 [2024-11-19 09:44:00.658576] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295
00:26:14.115 [2024-11-19 09:44:00.658582] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072
00:26:14.115 [2024-11-19 09:44:00.658586] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001
00:26:14.115 [2024-11-19 09:44:00.658595] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16
00:26:14.115 [2024-11-19 09:44:00.658601] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1
00:26:14.115 [2024-11-19 09:44:00.658606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms)
00:26:14.115 [2024-11-19 09:44:00.658619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms)
00:26:14.115 [2024-11-19 09:44:00.658626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.658646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:26:14.115 [2024-11-19 09:44:00.658658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0
00:26:14.115 [2024-11-19 09:44:00.658904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.115 [2024-11-19 09:44:00.658912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.115 [2024-11-19 09:44:00.658916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690
00:26:14.115 [2024-11-19 09:44:00.658928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.658943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:14.115 [2024-11-19 09:44:00.658950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.658963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:14.115 [2024-11-19 09:44:00.658970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.658984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:14.115 [2024-11-19 09:44:00.658990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.658998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.659004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:14.115 [2024-11-19 09:44:00.659010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:26:14.115 [2024-11-19 09:44:00.659019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:26:14.115 [2024-11-19 09:44:00.659026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.115 [2024-11-19 09:44:00.659030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x199d690)
00:26:14.115 [2024-11-19 09:44:00.659037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.115 [2024-11-19 09:44:00.659049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff100, cid 0, qid 0
00:26:14.115 [2024-11-19 09:44:00.659054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff280, cid 1, qid 0
00:26:14.115 [2024-11-19 09:44:00.659060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff400, cid 2, qid 0
00:26:14.116 [2024-11-19 09:44:00.659065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0
00:26:14.116 [2024-11-19 09:44:00.659070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff700, cid 4, qid 0
00:26:14.116 [2024-11-19 09:44:00.659342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.116 [2024-11-19 09:44:00.659349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.116 [2024-11-19 09:44:00.659353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff700) on tqpair=0x199d690
00:26:14.116 [2024-11-19 09:44:00.659365] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:26:14.116 [2024-11-19 09:44:00.659371] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
00:26:14.116 [2024-11-19 09:44:00.659382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x199d690)
00:26:14.116 [2024-11-19 09:44:00.659394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.116 [2024-11-19 09:44:00.659405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff700, cid 4, qid 0
00:26:14.116 [2024-11-19 09:44:00.659633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:14.116 [2024-11-19 09:44:00.659639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:14.116 [2024-11-19 09:44:00.659643] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659647] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x199d690): datao=0, datal=4096, cccid=4
00:26:14.116 [2024-11-19 09:44:00.659652] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ff700) on tqpair(0x199d690): expected_datao=0, payload_size=4096
00:26:14.116 [2024-11-19 09:44:00.659657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659684] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659689] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.116 [2024-11-19 09:44:00.659910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.116 [2024-11-19 09:44:00.659914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff700) on tqpair=0x199d690
00:26:14.116 [2024-11-19 09:44:00.659932] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:26:14.116 [2024-11-19 09:44:00.659959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x199d690)
00:26:14.116 [2024-11-19 09:44:00.659971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.116 [2024-11-19 09:44:00.659978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.659987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x199d690)
00:26:14.116 [2024-11-19 09:44:00.659993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:26:14.116 [2024-11-19 09:44:00.660008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff700, cid 4, qid 0
00:26:14.116 [2024-11-19 09:44:00.660014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff880, cid 5, qid 0
00:26:14.116 [2024-11-19 09:44:00.664176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:14.116 [2024-11-19 09:44:00.664185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:14.116 [2024-11-19 09:44:00.664190] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.664197] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x199d690): datao=0, datal=1024, cccid=4
00:26:14.116 [2024-11-19 09:44:00.664203] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ff700) on tqpair(0x199d690): expected_datao=0, payload_size=1024
00:26:14.116 [2024-11-19 09:44:00.664207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.664215] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.664220] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.664226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.116 [2024-11-19 09:44:00.664232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.116 [2024-11-19 09:44:00.664237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.664240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff880) on tqpair=0x199d690
00:26:14.116 [2024-11-19 09:44:00.704169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.116 [2024-11-19 09:44:00.704181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.116 [2024-11-19 09:44:00.704185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.704189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff700) on tqpair=0x199d690
00:26:14.116 [2024-11-19 09:44:00.704205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.704209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x199d690)
00:26:14.116 [2024-11-19 09:44:00.704218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.116 [2024-11-19 09:44:00.704237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff700, cid 4, qid 0
00:26:14.116 [2024-11-19 09:44:00.704463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:14.116 [2024-11-19 09:44:00.704470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:14.116 [2024-11-19 09:44:00.704474] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.704477] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x199d690): datao=0, datal=3072, cccid=4
00:26:14.116 [2024-11-19 09:44:00.704482] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ff700) on tqpair(0x199d690): expected_datao=0, payload_size=3072
00:26:14.116 [2024-11-19 09:44:00.704487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.704494] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.704498] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.704689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.116 [2024-11-19 09:44:00.704695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.116 [2024-11-19 09:44:00.704699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.704703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff700) on tqpair=0x199d690
00:26:14.116 [2024-11-19 09:44:00.704712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.704715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x199d690)
00:26:14.116 [2024-11-19 09:44:00.704722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.116 [2024-11-19 09:44:00.704736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff700, cid 4, qid 0
00:26:14.116 [2024-11-19 09:44:00.704989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:14.116 [2024-11-19 09:44:00.704996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:14.116 [2024-11-19 09:44:00.705001] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.705006] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x199d690): datao=0, datal=8, cccid=4
00:26:14.116 [2024-11-19 09:44:00.705015] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ff700) on tqpair(0x199d690): expected_datao=0, payload_size=8
00:26:14.116 [2024-11-19 09:44:00.705020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.705026] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.705030] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.749168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.116 [2024-11-19 09:44:00.749180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.116 [2024-11-19 09:44:00.749184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.116 [2024-11-19 09:44:00.749188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff700) on tqpair=0x199d690
00:26:14.116 =====================================================
00:26:14.116 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:26:14.116 =====================================================
00:26:14.116 Controller Capabilities/Features
00:26:14.116 ================================
00:26:14.116 Vendor ID: 0000
00:26:14.116 Subsystem Vendor ID: 0000
00:26:14.116 Serial Number: ....................
00:26:14.116 Model Number: ........................................
00:26:14.116 Firmware Version: 25.01
00:26:14.116 Recommended Arb Burst: 0
00:26:14.116 IEEE OUI Identifier: 00 00 00
00:26:14.116 Multi-path I/O
00:26:14.116 May have multiple subsystem ports: No
00:26:14.116 May have multiple controllers: No
00:26:14.116 Associated with SR-IOV VF: No
00:26:14.116 Max Data Transfer Size: 131072
00:26:14.116 Max Number of Namespaces: 0
00:26:14.116 Max Number of I/O Queues: 1024
00:26:14.116 NVMe Specification Version (VS): 1.3
00:26:14.116 NVMe Specification Version (Identify): 1.3
00:26:14.116 Maximum Queue Entries: 128
00:26:14.116 Contiguous Queues Required: Yes
00:26:14.116 Arbitration Mechanisms Supported
00:26:14.116 Weighted Round Robin: Not Supported
00:26:14.116 Vendor Specific: Not Supported
00:26:14.116 Reset Timeout: 15000 ms
00:26:14.116 Doorbell Stride: 4 bytes
00:26:14.116 NVM Subsystem Reset: Not Supported
00:26:14.116 Command Sets Supported
00:26:14.116 NVM Command Set: Supported
00:26:14.116 Boot Partition: Not Supported
00:26:14.116 Memory Page Size Minimum: 4096 bytes
00:26:14.116 Memory Page Size Maximum: 4096 bytes
00:26:14.116 Persistent Memory Region: Not Supported
00:26:14.116 Optional Asynchronous Events Supported
00:26:14.116 Namespace Attribute Notices: Not Supported
00:26:14.116 Firmware Activation Notices: Not Supported
00:26:14.116 ANA Change Notices: Not Supported
00:26:14.117 PLE Aggregate Log Change Notices: Not Supported
00:26:14.117 LBA Status Info Alert Notices: Not Supported
00:26:14.117 EGE Aggregate Log Change Notices: Not Supported
00:26:14.117 Normal NVM Subsystem Shutdown event: Not Supported
00:26:14.117 Zone Descriptor Change Notices: Not Supported
00:26:14.117 Discovery Log Change Notices: Supported
00:26:14.117 Controller Attributes
00:26:14.117 128-bit Host Identifier: Not Supported
00:26:14.117 Non-Operational Permissive Mode: Not Supported
00:26:14.117 NVM Sets: Not Supported
00:26:14.117 Read Recovery Levels: Not Supported
00:26:14.117 Endurance Groups: Not Supported
00:26:14.117 Predictable Latency Mode: Not Supported
00:26:14.117 Traffic Based Keep ALive: Not Supported
00:26:14.117 Namespace Granularity: Not Supported
00:26:14.117 SQ Associations: Not Supported
00:26:14.117 UUID List: Not Supported
00:26:14.117 Multi-Domain Subsystem: Not Supported
00:26:14.117 Fixed Capacity Management: Not Supported
00:26:14.117 Variable Capacity Management: Not Supported
00:26:14.117 Delete Endurance Group: Not Supported
00:26:14.117 Delete NVM Set: Not Supported
00:26:14.117 Extended LBA Formats Supported: Not Supported
00:26:14.117 Flexible Data Placement Supported: Not Supported
00:26:14.117
00:26:14.117 Controller Memory Buffer Support
00:26:14.117 ================================
00:26:14.117 Supported: No
00:26:14.117
00:26:14.117 Persistent Memory Region Support
00:26:14.117 ================================
00:26:14.117 Supported: No
00:26:14.117
00:26:14.117 Admin Command Set Attributes
00:26:14.117 ============================
00:26:14.117 Security Send/Receive: Not Supported
00:26:14.117 Format NVM: Not Supported
00:26:14.117 Firmware Activate/Download: Not Supported
00:26:14.117 Namespace Management: Not Supported
00:26:14.117 Device Self-Test: Not Supported
00:26:14.117 Directives: Not Supported
00:26:14.117 NVMe-MI: Not Supported
00:26:14.117 Virtualization Management: Not Supported
00:26:14.117 Doorbell Buffer Config: Not Supported
00:26:14.117 Get LBA Status Capability: Not Supported
00:26:14.117 Command & Feature Lockdown Capability: Not Supported
00:26:14.117 Abort Command Limit: 1
00:26:14.117 Async Event Request Limit: 4
00:26:14.117 Number of Firmware Slots: N/A
00:26:14.117 Firmware Slot 1 Read-Only: N/A
00:26:14.117 Firmware Activation Without Reset: N/A
00:26:14.117 Multiple Update Detection Support: N/A
00:26:14.117 Firmware Update Granularity: No Information Provided
00:26:14.117 Per-Namespace SMART Log: No
00:26:14.117 Asymmetric Namespace Access Log Page: Not Supported
00:26:14.117 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:26:14.117 Command Effects Log Page: Not Supported
00:26:14.117 Get Log Page Extended Data: Supported
00:26:14.117 Telemetry Log Pages: Not Supported
00:26:14.117 Persistent Event Log Pages: Not Supported
00:26:14.117 Supported Log Pages Log Page: May Support
00:26:14.117 Commands Supported & Effects Log Page: Not Supported
00:26:14.117 Feature Identifiers & Effects Log Page:May Support
00:26:14.117 NVMe-MI Commands & Effects Log Page: May Support
00:26:14.117 Data Area 4 for Telemetry Log: Not Supported
00:26:14.117 Error Log Page Entries Supported: 128
00:26:14.117 Keep Alive: Not Supported
00:26:14.117
00:26:14.117 NVM Command Set Attributes
00:26:14.117 ==========================
00:26:14.117 Submission Queue Entry Size
00:26:14.117 Max: 1
00:26:14.117 Min: 1
00:26:14.117 Completion Queue Entry Size
00:26:14.117 Max: 1
00:26:14.117 Min: 1
00:26:14.117 Number of Namespaces: 0
00:26:14.117 Compare Command: Not Supported
00:26:14.117 Write Uncorrectable Command: Not Supported
00:26:14.117 Dataset Management Command: Not Supported
00:26:14.117 Write Zeroes Command: Not Supported
00:26:14.117 Set Features Save Field: Not Supported
00:26:14.117 Reservations: Not Supported
00:26:14.117 Timestamp: Not Supported
00:26:14.117 Copy: Not Supported
00:26:14.117 Volatile Write Cache: Not Present
00:26:14.117 Atomic Write Unit (Normal): 1
00:26:14.117 Atomic Write Unit (PFail): 1
00:26:14.117 Atomic Compare & Write Unit: 1
00:26:14.117 Fused Compare & Write: Supported
00:26:14.117 Scatter-Gather List
00:26:14.117 SGL Command Set: Supported
00:26:14.117 SGL Keyed: Supported
00:26:14.117 SGL Bit Bucket Descriptor: Not Supported
00:26:14.117 SGL Metadata Pointer: Not Supported
00:26:14.117 Oversized SGL: Not Supported
00:26:14.117 SGL Metadata Address: Not Supported
00:26:14.117 SGL Offset: Supported
00:26:14.117 Transport SGL Data Block: Not Supported
00:26:14.117 Replay Protected Memory Block: Not Supported
00:26:14.117
00:26:14.117 Firmware Slot Information
00:26:14.117 =========================
00:26:14.117 Active slot: 0
00:26:14.117
00:26:14.117
00:26:14.117 Error Log
00:26:14.117 =========
00:26:14.117
00:26:14.117 Active Namespaces
00:26:14.117 =================
00:26:14.117 Discovery Log Page
00:26:14.117 ==================
00:26:14.117 Generation Counter: 2
00:26:14.117 Number of Records: 2
00:26:14.117 Record Format: 0
00:26:14.117
00:26:14.117 Discovery Log Entry 0
00:26:14.117 ----------------------
00:26:14.117 Transport Type: 3 (TCP)
00:26:14.117 Address Family: 1 (IPv4)
00:26:14.117 Subsystem Type: 3 (Current Discovery Subsystem)
00:26:14.117 Entry Flags:
00:26:14.117 Duplicate Returned Information: 1
00:26:14.117 Explicit Persistent Connection Support for Discovery: 1
00:26:14.117 Transport Requirements:
00:26:14.117 Secure Channel: Not Required
00:26:14.117 Port ID: 0 (0x0000)
00:26:14.117 Controller ID: 65535 (0xffff)
00:26:14.117 Admin Max SQ Size: 128
00:26:14.117 Transport Service Identifier: 4420
00:26:14.117 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:26:14.117 Transport Address: 10.0.0.2
00:26:14.117 Discovery Log Entry 1
00:26:14.117 ----------------------
00:26:14.117 Transport Type: 3 (TCP)
00:26:14.117 Address Family: 1 (IPv4)
00:26:14.117 Subsystem Type: 2 (NVM Subsystem)
00:26:14.117 Entry Flags:
00:26:14.117 Duplicate Returned Information: 0
00:26:14.117 Explicit Persistent Connection Support for Discovery: 0
00:26:14.117 Transport Requirements:
00:26:14.117 Secure Channel: Not Required
00:26:14.117 Port ID: 0 (0x0000)
00:26:14.117 Controller ID: 65535 (0xffff)
00:26:14.117 Admin Max SQ Size: 128
00:26:14.117 Transport Service Identifier: 4420
00:26:14.117 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:26:14.117 Transport Address: 10.0.0.2 [2024-11-19 09:44:00.749298] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:26:14.117 [2024-11-19 09:44:00.749310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff100) on tqpair=0x199d690
00:26:14.117 [2024-11-19 09:44:00.749318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:14.117 [2024-11-19 09:44:00.749324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff280) on tqpair=0x199d690
00:26:14.117 [2024-11-19 09:44:00.749329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:14.117 [2024-11-19 09:44:00.749334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff400) on tqpair=0x199d690
00:26:14.117 [2024-11-19 09:44:00.749339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:14.117 [2024-11-19 09:44:00.749344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690
00:26:14.117 [2024-11-19 09:44:00.749349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:14.117 [2024-11-19 09:44:00.749361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.117 [2024-11-19 09:44:00.749365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.117 [2024-11-19 09:44:00.749369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690)
00:26:14.117 [2024-11-19 09:44:00.749377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.117 [2024-11-19 09:44:00.749392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0
00:26:14.117 [2024-11-19 09:44:00.749618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.117 [2024-11-19 09:44:00.749625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.117 [2024-11-19 09:44:00.749628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.117 [2024-11-19 09:44:00.749632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690
00:26:14.117 [2024-11-19 09:44:00.749640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.117 [2024-11-19 09:44:00.749644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.117 [2024-11-19 09:44:00.749647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690)
00:26:14.117 [2024-11-19 09:44:00.749654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.118 [2024-11-19 09:44:00.749667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0
00:26:14.118 [2024-11-19 09:44:00.749909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.118 [2024-11-19 09:44:00.749915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.118 [2024-11-19 09:44:00.749919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.118 [2024-11-19 09:44:00.749925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690
00:26:14.118 [2024-11-19 09:44:00.749930] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:26:14.118 [2024-11-19 09:44:00.749935] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:26:14.118 [2024-11-19 09:44:00.749945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.118 [2024-11-19 09:44:00.749949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.118 [2024-11-19 09:44:00.749953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690)
00:26:14.118 [2024-11-19 09:44:00.749960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.118 [2024-11-19 09:44:00.749970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0
00:26:14.118 [2024-11-19 09:44:00.750212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.118 [2024-11-19 09:44:00.750219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.118 [2024-11-19 09:44:00.750222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.118 [2024-11-19 09:44:00.750226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690
00:26:14.118 [2024-11-19 09:44:00.750237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:14.118 [2024-11-19 09:44:00.750241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:14.118 [2024-11-19 09:44:00.750244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690)
00:26:14.118 [2024-11-19 09:44:00.750251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:14.118 [2024-11-19 09:44:00.750262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0
00:26:14.118 [2024-11-19 09:44:00.750464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:14.118 [2024-11-19 09:44:00.750470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:14.118 [2024-11-19 09:44:00.750473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:14.118 [2024-11-19 09:44:00.750477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on
tqpair=0x199d690 00:26:14.118 [2024-11-19 09:44:00.750487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.750491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.750494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.118 [2024-11-19 09:44:00.750501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.118 [2024-11-19 09:44:00.750511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.118 [2024-11-19 09:44:00.750744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.118 [2024-11-19 09:44:00.750750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.118 [2024-11-19 09:44:00.750753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.750757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.118 [2024-11-19 09:44:00.750768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.750772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.750775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.118 [2024-11-19 09:44:00.750782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.118 [2024-11-19 09:44:00.750793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.118 [2024-11-19 09:44:00.751018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.118 [2024-11-19 09:44:00.751027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:26:14.118 [2024-11-19 09:44:00.751030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.118 [2024-11-19 09:44:00.751045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.118 [2024-11-19 09:44:00.751059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.118 [2024-11-19 09:44:00.751069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.118 [2024-11-19 09:44:00.751321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.118 [2024-11-19 09:44:00.751327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.118 [2024-11-19 09:44:00.751331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.118 [2024-11-19 09:44:00.751344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.118 [2024-11-19 09:44:00.751359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.118 [2024-11-19 09:44:00.751369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x19ff580, cid 3, qid 0 00:26:14.118 [2024-11-19 09:44:00.751624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.118 [2024-11-19 09:44:00.751630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.118 [2024-11-19 09:44:00.751633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.118 [2024-11-19 09:44:00.751647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.118 [2024-11-19 09:44:00.751662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.118 [2024-11-19 09:44:00.751672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.118 [2024-11-19 09:44:00.751876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.118 [2024-11-19 09:44:00.751882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.118 [2024-11-19 09:44:00.751885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.118 [2024-11-19 09:44:00.751899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.751906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.118 [2024-11-19 09:44:00.751913] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.118 [2024-11-19 09:44:00.751923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.118 [2024-11-19 09:44:00.752126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.118 [2024-11-19 09:44:00.752132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.118 [2024-11-19 09:44:00.752137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.118 [2024-11-19 09:44:00.752151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.118 [2024-11-19 09:44:00.752171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.118 [2024-11-19 09:44:00.752181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.118 [2024-11-19 09:44:00.752381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.118 [2024-11-19 09:44:00.752387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.118 [2024-11-19 09:44:00.752391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.118 [2024-11-19 09:44:00.752404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752408] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.118 [2024-11-19 09:44:00.752418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.118 [2024-11-19 09:44:00.752429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.118 [2024-11-19 09:44:00.752630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.118 [2024-11-19 09:44:00.752636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.118 [2024-11-19 09:44:00.752640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.118 [2024-11-19 09:44:00.752654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.118 [2024-11-19 09:44:00.752668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.118 [2024-11-19 09:44:00.752679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.118 [2024-11-19 09:44:00.752885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.118 [2024-11-19 09:44:00.752891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.118 [2024-11-19 09:44:00.752895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.118 [2024-11-19 09:44:00.752899] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.119 [2024-11-19 09:44:00.752908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.119 [2024-11-19 09:44:00.752912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.119 [2024-11-19 09:44:00.752916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.119 [2024-11-19 09:44:00.752922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.119 [2024-11-19 09:44:00.752933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.119 [2024-11-19 09:44:00.753136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.119 [2024-11-19 09:44:00.753142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.119 [2024-11-19 09:44:00.753146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.119 [2024-11-19 09:44:00.753151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.119 [2024-11-19 09:44:00.757169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.119 [2024-11-19 09:44:00.757175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.119 [2024-11-19 09:44:00.757179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x199d690) 00:26:14.119 [2024-11-19 09:44:00.757186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.119 [2024-11-19 09:44:00.757197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ff580, cid 3, qid 0 00:26:14.119 [2024-11-19 09:44:00.757415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.119 [2024-11-19 
09:44:00.757421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.119 [2024-11-19 09:44:00.757425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.119 [2024-11-19 09:44:00.757429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ff580) on tqpair=0x199d690 00:26:14.119 [2024-11-19 09:44:00.757437] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:26:14.119 00:26:14.119 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:14.119 [2024-11-19 09:44:00.803561] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:14.119 [2024-11-19 09:44:00.803612] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432231 ] 00:26:14.383 [2024-11-19 09:44:00.860696] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:26:14.383 [2024-11-19 09:44:00.860762] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:14.383 [2024-11-19 09:44:00.860768] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:14.383 [2024-11-19 09:44:00.860784] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:14.383 [2024-11-19 09:44:00.860797] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:14.383 [2024-11-19 09:44:00.861525] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting 
state to wait for connect adminq (no timeout) 00:26:14.383 [2024-11-19 09:44:00.861566] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x93c690 0 00:26:14.383 [2024-11-19 09:44:00.868178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:14.383 [2024-11-19 09:44:00.868192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:14.383 [2024-11-19 09:44:00.868197] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:14.383 [2024-11-19 09:44:00.868200] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:14.383 [2024-11-19 09:44:00.868238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.383 [2024-11-19 09:44:00.868244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.383 [2024-11-19 09:44:00.868249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.383 [2024-11-19 09:44:00.868263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:14.383 [2024-11-19 09:44:00.868284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.383 [2024-11-19 09:44:00.876170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.383 [2024-11-19 09:44:00.876185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.383 [2024-11-19 09:44:00.876189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.876194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.384 [2024-11-19 09:44:00.876204] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:14.384 [2024-11-19 09:44:00.876212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no 
timeout) 00:26:14.384 [2024-11-19 09:44:00.876218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:26:14.384 [2024-11-19 09:44:00.876233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.876237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.876240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.384 [2024-11-19 09:44:00.876249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.384 [2024-11-19 09:44:00.876265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.384 [2024-11-19 09:44:00.876541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.384 [2024-11-19 09:44:00.876547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.384 [2024-11-19 09:44:00.876551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.876555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.384 [2024-11-19 09:44:00.876560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:26:14.384 [2024-11-19 09:44:00.876568] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:26:14.384 [2024-11-19 09:44:00.876575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.876579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.876582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.384 
[2024-11-19 09:44:00.876589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.384 [2024-11-19 09:44:00.876600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.384 [2024-11-19 09:44:00.876828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.384 [2024-11-19 09:44:00.876835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.384 [2024-11-19 09:44:00.876838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.876842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.384 [2024-11-19 09:44:00.876848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:26:14.384 [2024-11-19 09:44:00.876857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:14.384 [2024-11-19 09:44:00.876863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.876867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.876871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.384 [2024-11-19 09:44:00.876878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.384 [2024-11-19 09:44:00.876888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.384 [2024-11-19 09:44:00.877137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.384 [2024-11-19 09:44:00.877143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.384 [2024-11-19 
09:44:00.877149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.877153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.384 [2024-11-19 09:44:00.877166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:14.384 [2024-11-19 09:44:00.877176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.877180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.877183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.384 [2024-11-19 09:44:00.877190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.384 [2024-11-19 09:44:00.877201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.384 [2024-11-19 09:44:00.877389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.384 [2024-11-19 09:44:00.877396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.384 [2024-11-19 09:44:00.877399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.877403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.384 [2024-11-19 09:44:00.877408] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:14.384 [2024-11-19 09:44:00.877413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:14.384 [2024-11-19 09:44:00.877421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:14.384 [2024-11-19 09:44:00.877530] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:26:14.384 [2024-11-19 09:44:00.877535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:14.384 [2024-11-19 09:44:00.877543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.877547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.877551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.384 [2024-11-19 09:44:00.877557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.384 [2024-11-19 09:44:00.877568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.384 [2024-11-19 09:44:00.877834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.384 [2024-11-19 09:44:00.877840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.384 [2024-11-19 09:44:00.877844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.877848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.384 [2024-11-19 09:44:00.877852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:14.384 [2024-11-19 09:44:00.877862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.877866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.877870] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.384 [2024-11-19 09:44:00.877876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.384 [2024-11-19 09:44:00.877887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.384 [2024-11-19 09:44:00.878100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.384 [2024-11-19 09:44:00.878109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.384 [2024-11-19 09:44:00.878113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.878117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.384 [2024-11-19 09:44:00.878121] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:14.384 [2024-11-19 09:44:00.878126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:14.384 [2024-11-19 09:44:00.878134] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:26:14.384 [2024-11-19 09:44:00.878146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:14.384 [2024-11-19 09:44:00.878156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.878164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.384 [2024-11-19 09:44:00.878171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.384 [2024-11-19 09:44:00.878182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.384 [2024-11-19 09:44:00.878446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:14.384 [2024-11-19 09:44:00.878453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:14.384 [2024-11-19 09:44:00.878457] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.878461] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93c690): datao=0, datal=4096, cccid=0 00:26:14.384 [2024-11-19 09:44:00.878466] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x99e100) on tqpair(0x93c690): expected_datao=0, payload_size=4096 00:26:14.384 [2024-11-19 09:44:00.878470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.878478] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.878482] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.878589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.384 [2024-11-19 09:44:00.878596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.384 [2024-11-19 09:44:00.878599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.878603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.384 [2024-11-19 09:44:00.878612] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:26:14.384 [2024-11-19 09:44:00.878616] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:26:14.384 [2024-11-19 09:44:00.878621] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:26:14.384 [2024-11-19 09:44:00.878632] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:26:14.384 [2024-11-19 09:44:00.878636] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:26:14.384 [2024-11-19 09:44:00.878641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:26:14.384 [2024-11-19 09:44:00.878653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:14.384 [2024-11-19 09:44:00.878660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.384 [2024-11-19 09:44:00.878664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.878667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.878683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:14.385 [2024-11-19 09:44:00.878695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.385 [2024-11-19 09:44:00.878945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.385 [2024-11-19 09:44:00.878951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.385 [2024-11-19 09:44:00.878955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.878959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.385 [2024-11-19 09:44:00.878966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.878969] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.878973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.878979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.385 [2024-11-19 09:44:00.878986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.878989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.878993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.878999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.385 [2024-11-19 09:44:00.879005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.879018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.385 [2024-11-19 09:44:00.879024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.879038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.385 [2024-11-19 09:44:00.879042] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.879051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.879058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.879068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.385 [2024-11-19 09:44:00.879080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e100, cid 0, qid 0 00:26:14.385 [2024-11-19 09:44:00.879086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e280, cid 1, qid 0 00:26:14.385 [2024-11-19 09:44:00.879090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e400, cid 2, qid 0 00:26:14.385 [2024-11-19 09:44:00.879095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e580, cid 3, qid 0 00:26:14.385 [2024-11-19 09:44:00.879100] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e700, cid 4, qid 0 00:26:14.385 [2024-11-19 09:44:00.879369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.385 [2024-11-19 09:44:00.879379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.385 [2024-11-19 09:44:00.879382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e700) on tqpair=0x93c690 00:26:14.385 [2024-11-19 09:44:00.879394] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:26:14.385 [2024-11-19 09:44:00.879399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.879408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.879414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.879421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.879435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:14.385 [2024-11-19 09:44:00.879446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e700, cid 4, qid 0 00:26:14.385 [2024-11-19 09:44:00.879719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.385 [2024-11-19 09:44:00.879726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.385 [2024-11-19 09:44:00.879729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e700) on tqpair=0x93c690 00:26:14.385 [2024-11-19 09:44:00.879800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.879810] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.879818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.879822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.879829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.385 [2024-11-19 09:44:00.879840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e700, cid 4, qid 0 00:26:14.385 [2024-11-19 09:44:00.880062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:14.385 [2024-11-19 09:44:00.880068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:14.385 [2024-11-19 09:44:00.880072] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.880076] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93c690): datao=0, datal=4096, cccid=4 00:26:14.385 [2024-11-19 09:44:00.880080] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x99e700) on tqpair(0x93c690): expected_datao=0, payload_size=4096 00:26:14.385 [2024-11-19 09:44:00.880085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.880097] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.880101] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.884168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.385 [2024-11-19 09:44:00.884176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.385 [2024-11-19 09:44:00.884180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.884184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e700) on tqpair=0x93c690 00:26:14.385 [2024-11-19 09:44:00.884197] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:26:14.385 [2024-11-19 09:44:00.884208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.884218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.884225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.884230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.884237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.385 [2024-11-19 09:44:00.884250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e700, cid 4, qid 0 00:26:14.385 [2024-11-19 09:44:00.884445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:14.385 [2024-11-19 09:44:00.884451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:14.385 [2024-11-19 09:44:00.884455] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.884459] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93c690): datao=0, datal=4096, cccid=4 00:26:14.385 [2024-11-19 09:44:00.884463] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x99e700) on tqpair(0x93c690): expected_datao=0, payload_size=4096 00:26:14.385 [2024-11-19 09:44:00.884467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.385 
[2024-11-19 09:44:00.884486] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.884490] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.884679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.385 [2024-11-19 09:44:00.884685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.385 [2024-11-19 09:44:00.884688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.884692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e700) on tqpair=0x93c690 00:26:14.385 [2024-11-19 09:44:00.884706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.884715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:14.385 [2024-11-19 09:44:00.884723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.884727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93c690) 00:26:14.385 [2024-11-19 09:44:00.884733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.385 [2024-11-19 09:44:00.884744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e700, cid 4, qid 0 00:26:14.385 [2024-11-19 09:44:00.884957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:14.385 [2024-11-19 09:44:00.884963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:14.385 [2024-11-19 09:44:00.884967] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:14.385 [2024-11-19 09:44:00.884970] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93c690): datao=0, datal=4096, cccid=4 00:26:14.385 [2024-11-19 09:44:00.884975] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x99e700) on tqpair(0x93c690): expected_datao=0, payload_size=4096 00:26:14.386 [2024-11-19 09:44:00.884979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.884991] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.884995] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.386 [2024-11-19 09:44:00.885188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.386 [2024-11-19 09:44:00.885192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e700) on tqpair=0x93c690 00:26:14.386 [2024-11-19 09:44:00.885203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:14.386 [2024-11-19 09:44:00.885212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:26:14.386 [2024-11-19 09:44:00.885221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:26:14.386 [2024-11-19 09:44:00.885228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:14.386 [2024-11-19 09:44:00.885233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 
00:26:14.386 [2024-11-19 09:44:00.885239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:26:14.386 [2024-11-19 09:44:00.885244] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:26:14.386 [2024-11-19 09:44:00.885249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:26:14.386 [2024-11-19 09:44:00.885255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:26:14.386 [2024-11-19 09:44:00.885270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93c690) 00:26:14.386 [2024-11-19 09:44:00.885281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.386 [2024-11-19 09:44:00.885288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93c690) 00:26:14.386 [2024-11-19 09:44:00.885302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.386 [2024-11-19 09:44:00.885316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e700, cid 4, qid 0 00:26:14.386 [2024-11-19 09:44:00.885321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e880, cid 5, qid 0 00:26:14.386 [2024-11-19 09:44:00.885558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:26:14.386 [2024-11-19 09:44:00.885564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.386 [2024-11-19 09:44:00.885568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e700) on tqpair=0x93c690 00:26:14.386 [2024-11-19 09:44:00.885578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.386 [2024-11-19 09:44:00.885584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.386 [2024-11-19 09:44:00.885588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e880) on tqpair=0x93c690 00:26:14.386 [2024-11-19 09:44:00.885601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93c690) 00:26:14.386 [2024-11-19 09:44:00.885612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.386 [2024-11-19 09:44:00.885625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e880, cid 5, qid 0 00:26:14.386 [2024-11-19 09:44:00.885891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.386 [2024-11-19 09:44:00.885898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.386 [2024-11-19 09:44:00.885901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e880) on tqpair=0x93c690 00:26:14.386 [2024-11-19 09:44:00.885914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.885918] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93c690) 00:26:14.386 [2024-11-19 09:44:00.885925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.386 [2024-11-19 09:44:00.885935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e880, cid 5, qid 0 00:26:14.386 [2024-11-19 09:44:00.886142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.386 [2024-11-19 09:44:00.886149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.386 [2024-11-19 09:44:00.886152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e880) on tqpair=0x93c690 00:26:14.386 [2024-11-19 09:44:00.886173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93c690) 00:26:14.386 [2024-11-19 09:44:00.886184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.386 [2024-11-19 09:44:00.886194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e880, cid 5, qid 0 00:26:14.386 [2024-11-19 09:44:00.886395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.386 [2024-11-19 09:44:00.886401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.386 [2024-11-19 09:44:00.886405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e880) on tqpair=0x93c690 00:26:14.386 [2024-11-19 09:44:00.886424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:26:14.386 [2024-11-19 09:44:00.886428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93c690) 00:26:14.386 [2024-11-19 09:44:00.886435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.386 [2024-11-19 09:44:00.886442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93c690) 00:26:14.386 [2024-11-19 09:44:00.886452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.386 [2024-11-19 09:44:00.886460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x93c690) 00:26:14.386 [2024-11-19 09:44:00.886470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.386 [2024-11-19 09:44:00.886477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x93c690) 00:26:14.386 [2024-11-19 09:44:00.886487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.386 [2024-11-19 09:44:00.886501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e880, cid 5, qid 0 00:26:14.386 [2024-11-19 09:44:00.886506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e700, cid 4, qid 0 
00:26:14.386 [2024-11-19 09:44:00.886511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99ea00, cid 6, qid 0 00:26:14.386 [2024-11-19 09:44:00.886516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99eb80, cid 7, qid 0 00:26:14.386 [2024-11-19 09:44:00.886852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:14.386 [2024-11-19 09:44:00.886858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:14.386 [2024-11-19 09:44:00.886862] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886865] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93c690): datao=0, datal=8192, cccid=5 00:26:14.386 [2024-11-19 09:44:00.886870] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x99e880) on tqpair(0x93c690): expected_datao=0, payload_size=8192 00:26:14.386 [2024-11-19 09:44:00.886874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886941] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886945] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:14.386 [2024-11-19 09:44:00.886957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:14.386 [2024-11-19 09:44:00.886960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886964] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93c690): datao=0, datal=512, cccid=4 00:26:14.386 [2024-11-19 09:44:00.886969] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x99e700) on tqpair(0x93c690): expected_datao=0, payload_size=512 00:26:14.386 [2024-11-19 09:44:00.886973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.386 
[2024-11-19 09:44:00.886979] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886983] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.886989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:14.386 [2024-11-19 09:44:00.886995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:14.386 [2024-11-19 09:44:00.886998] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.887002] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93c690): datao=0, datal=512, cccid=6 00:26:14.386 [2024-11-19 09:44:00.887006] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x99ea00) on tqpair(0x93c690): expected_datao=0, payload_size=512 00:26:14.386 [2024-11-19 09:44:00.887010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.887017] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.887020] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:14.386 [2024-11-19 09:44:00.887026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:14.386 [2024-11-19 09:44:00.887032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:14.386 [2024-11-19 09:44:00.887035] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:14.387 [2024-11-19 09:44:00.887039] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93c690): datao=0, datal=4096, cccid=7 00:26:14.387 [2024-11-19 09:44:00.887043] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x99eb80) on tqpair(0x93c690): expected_datao=0, payload_size=4096 00:26:14.387 [2024-11-19 09:44:00.887048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.387 [2024-11-19 09:44:00.887055] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:26:14.387 [2024-11-19 09:44:00.887058] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:14.387 [2024-11-19 09:44:00.887067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.387 [2024-11-19 09:44:00.887073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.387 [2024-11-19 09:44:00.887081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.387 [2024-11-19 09:44:00.887085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e880) on tqpair=0x93c690 00:26:14.387 [2024-11-19 09:44:00.887098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.387 [2024-11-19 09:44:00.887104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.387 [2024-11-19 09:44:00.887108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.387 [2024-11-19 09:44:00.887112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e700) on tqpair=0x93c690 00:26:14.387 [2024-11-19 09:44:00.887123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.387 [2024-11-19 09:44:00.887129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.387 [2024-11-19 09:44:00.887133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.387 [2024-11-19 09:44:00.887137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99ea00) on tqpair=0x93c690 00:26:14.387 [2024-11-19 09:44:00.887144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.387 [2024-11-19 09:44:00.887150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.387 [2024-11-19 09:44:00.887153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.387 [2024-11-19 09:44:00.887157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99eb80) on tqpair=0x93c690 00:26:14.387 
===================================================== 00:26:14.387 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.387 ===================================================== 00:26:14.387 Controller Capabilities/Features 00:26:14.387 ================================ 00:26:14.387 Vendor ID: 8086 00:26:14.387 Subsystem Vendor ID: 8086 00:26:14.387 Serial Number: SPDK00000000000001 00:26:14.387 Model Number: SPDK bdev Controller 00:26:14.387 Firmware Version: 25.01 00:26:14.387 Recommended Arb Burst: 6 00:26:14.387 IEEE OUI Identifier: e4 d2 5c 00:26:14.387 Multi-path I/O 00:26:14.387 May have multiple subsystem ports: Yes 00:26:14.387 May have multiple controllers: Yes 00:26:14.387 Associated with SR-IOV VF: No 00:26:14.387 Max Data Transfer Size: 131072 00:26:14.387 Max Number of Namespaces: 32 00:26:14.387 Max Number of I/O Queues: 127 00:26:14.387 NVMe Specification Version (VS): 1.3 00:26:14.387 NVMe Specification Version (Identify): 1.3 00:26:14.387 Maximum Queue Entries: 128 00:26:14.387 Contiguous Queues Required: Yes 00:26:14.387 Arbitration Mechanisms Supported 00:26:14.387 Weighted Round Robin: Not Supported 00:26:14.387 Vendor Specific: Not Supported 00:26:14.387 Reset Timeout: 15000 ms 00:26:14.387 Doorbell Stride: 4 bytes 00:26:14.387 NVM Subsystem Reset: Not Supported 00:26:14.387 Command Sets Supported 00:26:14.387 NVM Command Set: Supported 00:26:14.387 Boot Partition: Not Supported 00:26:14.387 Memory Page Size Minimum: 4096 bytes 00:26:14.387 Memory Page Size Maximum: 4096 bytes 00:26:14.387 Persistent Memory Region: Not Supported 00:26:14.387 Optional Asynchronous Events Supported 00:26:14.387 Namespace Attribute Notices: Supported 00:26:14.387 Firmware Activation Notices: Not Supported 00:26:14.387 ANA Change Notices: Not Supported 00:26:14.387 PLE Aggregate Log Change Notices: Not Supported 00:26:14.387 LBA Status Info Alert Notices: Not Supported 00:26:14.387 EGE Aggregate Log Change Notices: Not Supported 
00:26:14.387 Normal NVM Subsystem Shutdown event: Not Supported 00:26:14.387 Zone Descriptor Change Notices: Not Supported 00:26:14.387 Discovery Log Change Notices: Not Supported 00:26:14.387 Controller Attributes 00:26:14.387 128-bit Host Identifier: Supported 00:26:14.387 Non-Operational Permissive Mode: Not Supported 00:26:14.387 NVM Sets: Not Supported 00:26:14.387 Read Recovery Levels: Not Supported 00:26:14.387 Endurance Groups: Not Supported 00:26:14.387 Predictable Latency Mode: Not Supported 00:26:14.387 Traffic Based Keep ALive: Not Supported 00:26:14.387 Namespace Granularity: Not Supported 00:26:14.387 SQ Associations: Not Supported 00:26:14.387 UUID List: Not Supported 00:26:14.387 Multi-Domain Subsystem: Not Supported 00:26:14.387 Fixed Capacity Management: Not Supported 00:26:14.387 Variable Capacity Management: Not Supported 00:26:14.387 Delete Endurance Group: Not Supported 00:26:14.387 Delete NVM Set: Not Supported 00:26:14.387 Extended LBA Formats Supported: Not Supported 00:26:14.387 Flexible Data Placement Supported: Not Supported 00:26:14.387 00:26:14.387 Controller Memory Buffer Support 00:26:14.387 ================================ 00:26:14.387 Supported: No 00:26:14.387 00:26:14.387 Persistent Memory Region Support 00:26:14.387 ================================ 00:26:14.387 Supported: No 00:26:14.387 00:26:14.387 Admin Command Set Attributes 00:26:14.387 ============================ 00:26:14.387 Security Send/Receive: Not Supported 00:26:14.387 Format NVM: Not Supported 00:26:14.387 Firmware Activate/Download: Not Supported 00:26:14.387 Namespace Management: Not Supported 00:26:14.387 Device Self-Test: Not Supported 00:26:14.387 Directives: Not Supported 00:26:14.387 NVMe-MI: Not Supported 00:26:14.387 Virtualization Management: Not Supported 00:26:14.387 Doorbell Buffer Config: Not Supported 00:26:14.387 Get LBA Status Capability: Not Supported 00:26:14.387 Command & Feature Lockdown Capability: Not Supported 00:26:14.387 Abort Command 
Limit: 4 00:26:14.387 Async Event Request Limit: 4 00:26:14.387 Number of Firmware Slots: N/A 00:26:14.387 Firmware Slot 1 Read-Only: N/A 00:26:14.387 Firmware Activation Without Reset: N/A 00:26:14.387 Multiple Update Detection Support: N/A 00:26:14.387 Firmware Update Granularity: No Information Provided 00:26:14.387 Per-Namespace SMART Log: No 00:26:14.387 Asymmetric Namespace Access Log Page: Not Supported 00:26:14.387 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:14.387 Command Effects Log Page: Supported 00:26:14.387 Get Log Page Extended Data: Supported 00:26:14.387 Telemetry Log Pages: Not Supported 00:26:14.387 Persistent Event Log Pages: Not Supported 00:26:14.387 Supported Log Pages Log Page: May Support 00:26:14.387 Commands Supported & Effects Log Page: Not Supported 00:26:14.387 Feature Identifiers & Effects Log Page:May Support 00:26:14.387 NVMe-MI Commands & Effects Log Page: May Support 00:26:14.387 Data Area 4 for Telemetry Log: Not Supported 00:26:14.387 Error Log Page Entries Supported: 128 00:26:14.387 Keep Alive: Supported 00:26:14.387 Keep Alive Granularity: 10000 ms 00:26:14.387 00:26:14.387 NVM Command Set Attributes 00:26:14.387 ========================== 00:26:14.387 Submission Queue Entry Size 00:26:14.387 Max: 64 00:26:14.387 Min: 64 00:26:14.387 Completion Queue Entry Size 00:26:14.387 Max: 16 00:26:14.387 Min: 16 00:26:14.387 Number of Namespaces: 32 00:26:14.387 Compare Command: Supported 00:26:14.387 Write Uncorrectable Command: Not Supported 00:26:14.387 Dataset Management Command: Supported 00:26:14.387 Write Zeroes Command: Supported 00:26:14.387 Set Features Save Field: Not Supported 00:26:14.387 Reservations: Supported 00:26:14.387 Timestamp: Not Supported 00:26:14.387 Copy: Supported 00:26:14.387 Volatile Write Cache: Present 00:26:14.387 Atomic Write Unit (Normal): 1 00:26:14.387 Atomic Write Unit (PFail): 1 00:26:14.387 Atomic Compare & Write Unit: 1 00:26:14.387 Fused Compare & Write: Supported 00:26:14.387 Scatter-Gather 
List 00:26:14.387 SGL Command Set: Supported 00:26:14.387 SGL Keyed: Supported 00:26:14.387 SGL Bit Bucket Descriptor: Not Supported 00:26:14.387 SGL Metadata Pointer: Not Supported 00:26:14.387 Oversized SGL: Not Supported 00:26:14.387 SGL Metadata Address: Not Supported 00:26:14.387 SGL Offset: Supported 00:26:14.387 Transport SGL Data Block: Not Supported 00:26:14.387 Replay Protected Memory Block: Not Supported 00:26:14.387 00:26:14.387 Firmware Slot Information 00:26:14.387 ========================= 00:26:14.387 Active slot: 1 00:26:14.387 Slot 1 Firmware Revision: 25.01 00:26:14.387 00:26:14.387 00:26:14.387 Commands Supported and Effects 00:26:14.387 ============================== 00:26:14.387 Admin Commands 00:26:14.387 -------------- 00:26:14.387 Get Log Page (02h): Supported 00:26:14.387 Identify (06h): Supported 00:26:14.387 Abort (08h): Supported 00:26:14.387 Set Features (09h): Supported 00:26:14.387 Get Features (0Ah): Supported 00:26:14.388 Asynchronous Event Request (0Ch): Supported 00:26:14.388 Keep Alive (18h): Supported 00:26:14.388 I/O Commands 00:26:14.388 ------------ 00:26:14.388 Flush (00h): Supported LBA-Change 00:26:14.388 Write (01h): Supported LBA-Change 00:26:14.388 Read (02h): Supported 00:26:14.388 Compare (05h): Supported 00:26:14.388 Write Zeroes (08h): Supported LBA-Change 00:26:14.388 Dataset Management (09h): Supported LBA-Change 00:26:14.388 Copy (19h): Supported LBA-Change 00:26:14.388 00:26:14.388 Error Log 00:26:14.388 ========= 00:26:14.388 00:26:14.388 Arbitration 00:26:14.388 =========== 00:26:14.388 Arbitration Burst: 1 00:26:14.388 00:26:14.388 Power Management 00:26:14.388 ================ 00:26:14.388 Number of Power States: 1 00:26:14.388 Current Power State: Power State #0 00:26:14.388 Power State #0: 00:26:14.388 Max Power: 0.00 W 00:26:14.388 Non-Operational State: Operational 00:26:14.388 Entry Latency: Not Reported 00:26:14.388 Exit Latency: Not Reported 00:26:14.388 Relative Read Throughput: 0 00:26:14.388 
Relative Read Latency: 0 00:26:14.388 Relative Write Throughput: 0 00:26:14.388 Relative Write Latency: 0 00:26:14.388 Idle Power: Not Reported 00:26:14.388 Active Power: Not Reported 00:26:14.388 Non-Operational Permissive Mode: Not Supported 00:26:14.388 00:26:14.388 Health Information 00:26:14.388 ================== 00:26:14.388 Critical Warnings: 00:26:14.388 Available Spare Space: OK 00:26:14.388 Temperature: OK 00:26:14.388 Device Reliability: OK 00:26:14.388 Read Only: No 00:26:14.388 Volatile Memory Backup: OK 00:26:14.388 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:14.388 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:14.388 Available Spare: 0% 00:26:14.388 Available Spare Threshold: 0% 00:26:14.388 Life Percentage Used:[2024-11-19 09:44:00.887269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.887274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x93c690) 00:26:14.388 [2024-11-19 09:44:00.887281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.388 [2024-11-19 09:44:00.887293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99eb80, cid 7, qid 0 00:26:14.388 [2024-11-19 09:44:00.887513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.388 [2024-11-19 09:44:00.887520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.388 [2024-11-19 09:44:00.887523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.887527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99eb80) on tqpair=0x93c690 00:26:14.388 [2024-11-19 09:44:00.887561] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:26:14.388 [2024-11-19 09:44:00.887571] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x99e100) on tqpair=0x93c690 00:26:14.388 [2024-11-19 09:44:00.887577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.388 [2024-11-19 09:44:00.887583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e280) on tqpair=0x93c690 00:26:14.388 [2024-11-19 09:44:00.887588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.388 [2024-11-19 09:44:00.887593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e400) on tqpair=0x93c690 00:26:14.388 [2024-11-19 09:44:00.887597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.388 [2024-11-19 09:44:00.887602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e580) on tqpair=0x93c690 00:26:14.388 [2024-11-19 09:44:00.887607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.388 [2024-11-19 09:44:00.887616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.887620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.887623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93c690) 00:26:14.388 [2024-11-19 09:44:00.887630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.388 [2024-11-19 09:44:00.887645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e580, cid 3, qid 0 00:26:14.388 [2024-11-19 09:44:00.887842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.388 [2024-11-19 09:44:00.887848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:26:14.388 [2024-11-19 09:44:00.887852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.887856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e580) on tqpair=0x93c690 00:26:14.388 [2024-11-19 09:44:00.887863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.887867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.887870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93c690) 00:26:14.388 [2024-11-19 09:44:00.887877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.388 [2024-11-19 09:44:00.887890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e580, cid 3, qid 0 00:26:14.388 [2024-11-19 09:44:00.888116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.388 [2024-11-19 09:44:00.888122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.388 [2024-11-19 09:44:00.888126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.888130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e580) on tqpair=0x93c690 00:26:14.388 [2024-11-19 09:44:00.888135] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:26:14.388 [2024-11-19 09:44:00.888139] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:26:14.388 [2024-11-19 09:44:00.888149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.888153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.888156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x93c690) 00:26:14.388 [2024-11-19 09:44:00.892177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.388 [2024-11-19 09:44:00.892190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x99e580, cid 3, qid 0 00:26:14.388 [2024-11-19 09:44:00.892408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:14.388 [2024-11-19 09:44:00.892415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:14.388 [2024-11-19 09:44:00.892418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:14.388 [2024-11-19 09:44:00.892422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x99e580) on tqpair=0x93c690 00:26:14.388 [2024-11-19 09:44:00.892430] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:26:14.388 0% 00:26:14.388 Data Units Read: 0 00:26:14.388 Data Units Written: 0 00:26:14.388 Host Read Commands: 0 00:26:14.388 Host Write Commands: 0 00:26:14.388 Controller Busy Time: 0 minutes 00:26:14.388 Power Cycles: 0 00:26:14.388 Power On Hours: 0 hours 00:26:14.388 Unsafe Shutdowns: 0 00:26:14.388 Unrecoverable Media Errors: 0 00:26:14.388 Lifetime Error Log Entries: 0 00:26:14.388 Warning Temperature Time: 0 minutes 00:26:14.388 Critical Temperature Time: 0 minutes 00:26:14.388 00:26:14.388 Number of Queues 00:26:14.388 ================ 00:26:14.388 Number of I/O Submission Queues: 127 00:26:14.388 Number of I/O Completion Queues: 127 00:26:14.388 00:26:14.388 Active Namespaces 00:26:14.388 ================= 00:26:14.388 Namespace ID:1 00:26:14.388 Error Recovery Timeout: Unlimited 00:26:14.388 Command Set Identifier: NVM (00h) 00:26:14.388 Deallocate: Supported 00:26:14.388 Deallocated/Unwritten Error: Not Supported 00:26:14.388 Deallocated Read Value: Unknown 00:26:14.388 Deallocate in Write Zeroes: 
Not Supported 00:26:14.388 Deallocated Guard Field: 0xFFFF 00:26:14.388 Flush: Supported 00:26:14.388 Reservation: Supported 00:26:14.388 Namespace Sharing Capabilities: Multiple Controllers 00:26:14.388 Size (in LBAs): 131072 (0GiB) 00:26:14.388 Capacity (in LBAs): 131072 (0GiB) 00:26:14.388 Utilization (in LBAs): 131072 (0GiB) 00:26:14.389 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:14.389 EUI64: ABCDEF0123456789 00:26:14.389 UUID: 4ad578ca-aceb-4758-968e-44dd0444b2e5 00:26:14.389 Thin Provisioning: Not Supported 00:26:14.389 Per-NS Atomic Units: Yes 00:26:14.389 Atomic Boundary Size (Normal): 0 00:26:14.389 Atomic Boundary Size (PFail): 0 00:26:14.389 Atomic Boundary Offset: 0 00:26:14.389 Maximum Single Source Range Length: 65535 00:26:14.389 Maximum Copy Length: 65535 00:26:14.389 Maximum Source Range Count: 1 00:26:14.389 NGUID/EUI64 Never Reused: No 00:26:14.389 Namespace Write Protected: No 00:26:14.389 Number of LBA Formats: 1 00:26:14.389 Current LBA Format: LBA Format #00 00:26:14.389 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:14.389 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # 
sync 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.389 rmmod nvme_tcp 00:26:14.389 rmmod nvme_fabrics 00:26:14.389 rmmod nvme_keyring 00:26:14.389 09:44:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 431994 ']' 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 431994 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 431994 ']' 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 431994 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 431994 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 431994' 00:26:14.389 killing process with pid 431994 00:26:14.389 09:44:01 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 431994 00:26:14.389 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 431994 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.650 09:44:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:17.197 00:26:17.197 real 0m11.570s 00:26:17.197 user 0m8.436s 00:26:17.197 sys 0m6.144s 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:17.197 ************************************ 00:26:17.197 END TEST nvmf_identify 00:26:17.197 
************************************ 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.197 ************************************ 00:26:17.197 START TEST nvmf_perf 00:26:17.197 ************************************ 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:17.197 * Looking for test storage... 00:26:17.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@337 -- # read -ra ver2 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.197 09:44:03 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:17.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.197 --rc genhtml_branch_coverage=1 00:26:17.197 --rc genhtml_function_coverage=1 00:26:17.197 --rc genhtml_legend=1 00:26:17.197 --rc geninfo_all_blocks=1 00:26:17.197 --rc geninfo_unexecuted_blocks=1 00:26:17.197 00:26:17.197 ' 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:17.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.197 --rc genhtml_branch_coverage=1 00:26:17.197 --rc genhtml_function_coverage=1 00:26:17.197 --rc genhtml_legend=1 00:26:17.197 --rc geninfo_all_blocks=1 00:26:17.197 --rc geninfo_unexecuted_blocks=1 00:26:17.197 00:26:17.197 ' 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:17.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.197 --rc genhtml_branch_coverage=1 00:26:17.197 --rc genhtml_function_coverage=1 00:26:17.197 --rc genhtml_legend=1 00:26:17.197 --rc geninfo_all_blocks=1 00:26:17.197 --rc geninfo_unexecuted_blocks=1 00:26:17.197 00:26:17.197 ' 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:17.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.197 --rc genhtml_branch_coverage=1 00:26:17.197 --rc genhtml_function_coverage=1 00:26:17.197 --rc genhtml_legend=1 00:26:17.197 --rc geninfo_all_blocks=1 00:26:17.197 --rc geninfo_unexecuted_blocks=1 00:26:17.197 00:26:17.197 ' 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.197 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.198 09:44:03 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:26:17.198 09:44:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:25.342 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.342 
09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:25.342 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:25.342 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:25.342 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:25.342 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.343 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.343 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.343 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.343 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:25.343 09:44:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:25.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:26:25.343 00:26:25.343 --- 10.0.0.2 ping statistics --- 00:26:25.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.343 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:25.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:26:25.343 00:26:25.343 --- 10.0.0.1 ping statistics --- 00:26:25.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.343 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=436548 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 436548 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:25.343 
09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 436548 ']' 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.343 09:44:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:25.343 [2024-11-19 09:44:11.190255] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:25.343 [2024-11-19 09:44:11.190324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.343 [2024-11-19 09:44:11.290819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.343 [2024-11-19 09:44:11.343512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.343 [2024-11-19 09:44:11.343564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.343 [2024-11-19 09:44:11.343573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.343 [2024-11-19 09:44:11.343580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.343 [2024-11-19 09:44:11.343587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:25.343 [2024-11-19 09:44:11.345590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.343 [2024-11-19 09:44:11.345751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.343 [2024-11-19 09:44:11.345911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.343 [2024-11-19 09:44:11.345912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.343 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.343 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:26:25.343 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.343 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.343 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:25.343 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.343 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:25.343 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:25.915 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:25.915 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:26.177 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:26.177 09:44:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:26.438 09:44:13 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:26.438 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:26.438 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:26.438 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:26.438 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:26.438 [2024-11-19 09:44:13.183080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.699 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:26.699 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:26.699 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:26.960 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:26.960 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:27.222 09:44:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.482 [2024-11-19 09:44:13.986806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.482 09:44:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:26:27.482 09:44:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:27.482 09:44:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:27.482 09:44:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:27.482 09:44:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:28.865 Initializing NVMe Controllers 00:26:28.865 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:28.865 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:28.865 Initialization complete. Launching workers. 00:26:28.865 ======================================================== 00:26:28.865 Latency(us) 00:26:28.865 Device Information : IOPS MiB/s Average min max 00:26:28.865 PCIE (0000:65:00.0) NSID 1 from core 0: 79608.02 310.97 401.10 13.23 5206.04 00:26:28.866 ======================================================== 00:26:28.866 Total : 79608.02 310.97 401.10 13.23 5206.04 00:26:28.866 00:26:28.866 09:44:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:30.250 Initializing NVMe Controllers 00:26:30.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:30.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:30.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:30.250 Initialization complete. Launching workers. 
00:26:30.250 ======================================================== 00:26:30.250 Latency(us) 00:26:30.250 Device Information : IOPS MiB/s Average min max 00:26:30.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 13044.85 102.75 45987.44 00:26:30.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17974.84 6983.92 50879.13 00:26:30.250 ======================================================== 00:26:30.250 Total : 135.00 0.53 15089.88 102.75 50879.13 00:26:30.250 00:26:30.250 09:44:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:31.633 Initializing NVMe Controllers 00:26:31.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:31.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:31.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:31.633 Initialization complete. Launching workers. 
00:26:31.633 ======================================================== 00:26:31.633 Latency(us) 00:26:31.633 Device Information : IOPS MiB/s Average min max 00:26:31.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11623.04 45.40 2755.22 452.53 6267.52 00:26:31.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3835.68 14.98 8387.66 7245.55 15846.02 00:26:31.633 ======================================================== 00:26:31.633 Total : 15458.72 60.39 4152.77 452.53 15846.02 00:26:31.633 00:26:31.633 09:44:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:31.633 09:44:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:31.633 09:44:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:34.183 Initializing NVMe Controllers 00:26:34.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.183 Controller IO queue size 128, less than required. 00:26:34.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.183 Controller IO queue size 128, less than required. 00:26:34.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:34.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:34.183 Initialization complete. Launching workers. 
00:26:34.183 ======================================================== 00:26:34.183 Latency(us) 00:26:34.183 Device Information : IOPS MiB/s Average min max 00:26:34.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1943.98 485.99 67228.58 38994.44 117092.52 00:26:34.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 619.99 155.00 215181.74 71909.68 310795.72 00:26:34.184 ======================================================== 00:26:34.184 Total : 2563.97 640.99 103005.08 38994.44 310795.72 00:26:34.184 00:26:34.184 09:44:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:34.184 No valid NVMe controllers or AIO or URING devices found 00:26:34.184 Initializing NVMe Controllers 00:26:34.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.184 Controller IO queue size 128, less than required. 00:26:34.184 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.184 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:34.184 Controller IO queue size 128, less than required. 00:26:34.184 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.184 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:34.184 WARNING: Some requested NVMe devices were skipped 00:26:34.445 09:44:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:36.987 Initializing NVMe Controllers 00:26:36.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.988 Controller IO queue size 128, less than required. 00:26:36.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:36.988 Controller IO queue size 128, less than required. 00:26:36.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:36.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:36.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:36.988 Initialization complete. Launching workers. 
00:26:36.988 00:26:36.988 ==================== 00:26:36.988 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:36.988 TCP transport: 00:26:36.988 polls: 33673 00:26:36.988 idle_polls: 19947 00:26:36.988 sock_completions: 13726 00:26:36.988 nvme_completions: 7199 00:26:36.988 submitted_requests: 10740 00:26:36.988 queued_requests: 1 00:26:36.988 00:26:36.988 ==================== 00:26:36.988 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:36.988 TCP transport: 00:26:36.988 polls: 30508 00:26:36.988 idle_polls: 15320 00:26:36.988 sock_completions: 15188 00:26:36.988 nvme_completions: 7545 00:26:36.988 submitted_requests: 11242 00:26:36.988 queued_requests: 1 00:26:36.988 ======================================================== 00:26:36.988 Latency(us) 00:26:36.988 Device Information : IOPS MiB/s Average min max 00:26:36.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1799.44 449.86 72371.94 34241.73 128474.37 00:26:36.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1885.94 471.48 68874.07 30483.97 106193.32 00:26:36.988 ======================================================== 00:26:36.988 Total : 3685.38 921.35 70581.96 30483.97 128474.37 00:26:36.988 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:36.988 09:44:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:36.988 rmmod nvme_tcp 00:26:36.988 rmmod nvme_fabrics 00:26:36.988 rmmod nvme_keyring 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 436548 ']' 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 436548 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 436548 ']' 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 436548 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:36.988 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 436548 00:26:37.249 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:37.249 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:37.249 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 436548' 00:26:37.249 killing process with pid 436548 00:26:37.249 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 436548 00:26:37.249 09:44:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 436548 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.160 09:44:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.073 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.334 00:26:41.334 real 0m24.390s 00:26:41.334 user 0m59.120s 00:26:41.334 sys 0m8.649s 00:26:41.334 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.334 09:44:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:41.334 ************************************ 00:26:41.334 END TEST nvmf_perf 00:26:41.334 ************************************ 00:26:41.334 09:44:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:41.334 09:44:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:41.334 09:44:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.334 09:44:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.334 ************************************ 00:26:41.334 START TEST nvmf_fio_host 00:26:41.334 ************************************ 00:26:41.334 09:44:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:41.334 * Looking for test storage... 00:26:41.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.334 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:41.334 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:41.334 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.595 09:44:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.595 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.596 09:44:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:41.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.596 --rc genhtml_branch_coverage=1 00:26:41.596 --rc genhtml_function_coverage=1 00:26:41.596 --rc genhtml_legend=1 00:26:41.596 --rc geninfo_all_blocks=1 00:26:41.596 --rc geninfo_unexecuted_blocks=1 00:26:41.596 00:26:41.596 ' 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:41.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.596 --rc genhtml_branch_coverage=1 00:26:41.596 --rc genhtml_function_coverage=1 00:26:41.596 --rc genhtml_legend=1 00:26:41.596 --rc geninfo_all_blocks=1 00:26:41.596 --rc geninfo_unexecuted_blocks=1 00:26:41.596 00:26:41.596 ' 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:41.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.596 --rc genhtml_branch_coverage=1 00:26:41.596 --rc genhtml_function_coverage=1 00:26:41.596 --rc genhtml_legend=1 00:26:41.596 --rc geninfo_all_blocks=1 00:26:41.596 --rc geninfo_unexecuted_blocks=1 00:26:41.596 00:26:41.596 ' 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:41.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.596 --rc genhtml_branch_coverage=1 00:26:41.596 --rc genhtml_function_coverage=1 00:26:41.596 --rc genhtml_legend=1 00:26:41.596 --rc geninfo_all_blocks=1 00:26:41.596 --rc geninfo_unexecuted_blocks=1 00:26:41.596 00:26:41.596 ' 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.596 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:41.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.597 09:44:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.597 09:44:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.0 (0x8086 - 0x159b)' 00:26:49.736 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:49.736 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.736 09:44:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:49.736 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:49.736 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.736 09:44:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:49.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:26:49.736 00:26:49.736 --- 10.0.0.2 ping statistics --- 00:26:49.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.736 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:49.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:26:49.736 00:26:49.736 --- 10.0.0.1 ping statistics --- 00:26:49.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.736 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.736 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=443558 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 443558 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 443558 ']' 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.737 09:44:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.737 [2024-11-19 09:44:35.646990] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:49.737 [2024-11-19 09:44:35.647056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.737 [2024-11-19 09:44:35.746605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:49.737 [2024-11-19 09:44:35.799450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.737 [2024-11-19 09:44:35.799502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:49.737 [2024-11-19 09:44:35.799511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.737 [2024-11-19 09:44:35.799518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.737 [2024-11-19 09:44:35.799525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.737 [2024-11-19 09:44:35.801544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.737 [2024-11-19 09:44:35.801708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.737 [2024-11-19 09:44:35.801869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.737 [2024-11-19 09:44:35.801869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.737 09:44:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.737 09:44:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:26:49.737 09:44:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:49.998 [2024-11-19 09:44:36.637744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.998 09:44:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:49.998 09:44:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:49.998 09:44:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.998 09:44:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:50.259 Malloc1 00:26:50.259 09:44:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.521 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:50.782 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.782 [2024-11-19 09:44:37.507042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:51.043 09:44:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:51.043 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:51.328 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:51.329 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:51.329 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:51.329 09:44:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:51.591 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:51.591 fio-3.35 00:26:51.591 Starting 1 thread 00:26:54.134 00:26:54.134 test: (groupid=0, jobs=1): err= 0: pid=444149: Tue Nov 19 09:44:40 2024 00:26:54.134 read: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(108MiB/2005msec) 00:26:54.134 slat (usec): min=2, max=284, avg= 2.14, stdev= 2.45 00:26:54.134 clat (usec): min=3149, max=8502, avg=5091.77, stdev=366.49 00:26:54.134 lat (usec): min=3151, max=8504, avg=5093.91, stdev=366.63 00:26:54.134 clat percentiles (usec): 00:26:54.134 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:26:54.134 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:26:54.134 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5604], 00:26:54.134 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 7570], 99.95th=[ 7767], 00:26:54.134 | 99.99th=[ 8455] 00:26:54.134 bw ( KiB/s): min=54168, max=55888, per=100.00%, avg=55338.00, stdev=789.14, samples=4 00:26:54.134 iops : min=13542, max=13972, avg=13834.50, stdev=197.28, samples=4 00:26:54.134 write: IOPS=13.8k, BW=54.0MiB/s (56.7MB/s)(108MiB/2005msec); 0 zone resets 00:26:54.134 slat (usec): min=2, max=266, avg= 2.21, stdev= 1.77 00:26:54.134 clat (usec): min=2622, max=8297, avg=4116.56, stdev=320.44 00:26:54.134 lat (usec): min=2624, max=8299, avg=4118.77, stdev=320.63 00:26:54.134 clat percentiles (usec): 00:26:54.134 | 1.00th=[ 3425], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3884], 00:26:54.134 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:26:54.134 | 70.00th=[ 
4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:26:54.134 | 99.00th=[ 4817], 99.50th=[ 5669], 99.90th=[ 6587], 99.95th=[ 7767], 00:26:54.134 | 99.99th=[ 8225] 00:26:54.134 bw ( KiB/s): min=54552, max=55656, per=99.97%, avg=55308.00, stdev=511.23, samples=4 00:26:54.134 iops : min=13638, max=13914, avg=13827.00, stdev=127.81, samples=4 00:26:54.134 lat (msec) : 4=17.26%, 10=82.74% 00:26:54.134 cpu : usr=75.80%, sys=23.05%, ctx=32, majf=0, minf=17 00:26:54.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:54.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:54.134 issued rwts: total=27719,27731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:54.134 00:26:54.134 Run status group 0 (all jobs): 00:26:54.134 READ: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=108MiB (114MB), run=2005-2005msec 00:26:54.134 WRITE: bw=54.0MiB/s (56.7MB/s), 54.0MiB/s-54.0MiB/s (56.7MB/s-56.7MB/s), io=108MiB (114MB), run=2005-2005msec 00:26:54.134 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:54.134 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:54.134 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:54.134 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:26:54.134 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:54.135 09:44:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:54.135 09:44:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:54.396 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:54.396 fio-3.35 00:26:54.396 Starting 1 thread 00:26:56.940 00:26:56.940 test: (groupid=0, jobs=1): err= 0: pid=444974: Tue Nov 19 09:44:43 2024 00:26:56.940 read: IOPS=9559, BW=149MiB/s (157MB/s)(299MiB/2003msec) 00:26:56.940 slat (usec): min=3, max=110, avg= 3.61, stdev= 1.59 00:26:56.940 clat (usec): min=1951, max=16292, avg=8221.11, stdev=1964.39 00:26:56.940 lat (usec): min=1955, max=16295, avg=8224.72, stdev=1964.51 00:26:56.940 clat percentiles (usec): 00:26:56.940 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 6390], 00:26:56.940 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8717], 00:26:56.940 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10945], 95.00th=[11338], 00:26:56.940 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14222], 99.95th=[14877], 00:26:56.940 | 99.99th=[16319] 00:26:56.940 bw ( KiB/s): min=70016, max=82784, per=49.17%, avg=75208.00, stdev=5523.52, samples=4 00:26:56.940 iops : min= 4376, max= 5174, avg=4700.50, stdev=345.22, samples=4 00:26:56.940 write: IOPS=5534, BW=86.5MiB/s (90.7MB/s)(154MiB/1778msec); 0 zone resets 00:26:56.940 slat (usec): min=39, max=450, avg=40.99, stdev= 8.32 00:26:56.940 clat (usec): min=2549, max=15211, avg=9074.01, stdev=1451.36 00:26:56.940 lat (usec): min=2598, max=15348, avg=9114.99, stdev=1453.30 00:26:56.940 clat percentiles (usec): 00:26:56.940 | 1.00th=[ 5866], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7898], 00:26:56.940 | 
30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:26:56.940 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[11076], 95.00th=[11600], 00:26:56.940 | 99.00th=[12780], 99.50th=[13304], 99.90th=[15008], 99.95th=[15008], 00:26:56.940 | 99.99th=[15270] 00:26:56.940 bw ( KiB/s): min=73088, max=86240, per=88.43%, avg=78312.00, stdev=5600.42, samples=4 00:26:56.940 iops : min= 4568, max= 5390, avg=4894.50, stdev=350.03, samples=4 00:26:56.940 lat (msec) : 2=0.01%, 4=0.60%, 10=77.56%, 20=21.84% 00:26:56.940 cpu : usr=83.62%, sys=15.03%, ctx=23, majf=0, minf=29 00:26:56.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:56.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:56.940 issued rwts: total=19147,9841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:56.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:56.940 00:26:56.940 Run status group 0 (all jobs): 00:26:56.940 READ: bw=149MiB/s (157MB/s), 149MiB/s-149MiB/s (157MB/s-157MB/s), io=299MiB (314MB), run=2003-2003msec 00:26:56.940 WRITE: bw=86.5MiB/s (90.7MB/s), 86.5MiB/s-86.5MiB/s (90.7MB/s-90.7MB/s), io=154MiB (161MB), run=1778-1778msec 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.940 09:44:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:56.940 rmmod nvme_tcp 00:26:56.940 rmmod nvme_fabrics 00:26:56.940 rmmod nvme_keyring 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 443558 ']' 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 443558 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 443558 ']' 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 443558 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.940 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 443558 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 443558' 00:26:57.201 killing process 
with pid 443558 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 443558 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 443558 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.201 09:44:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.746 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:59.746 00:26:59.746 real 0m17.985s 00:26:59.746 user 1m7.395s 00:26:59.746 sys 0m7.713s 00:26:59.746 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:59.746 09:44:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.746 ************************************ 00:26:59.746 END TEST nvmf_fio_host 
00:26:59.746 ************************************ 00:26:59.746 09:44:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:59.746 09:44:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:59.746 09:44:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:59.746 09:44:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.746 ************************************ 00:26:59.746 START TEST nvmf_failover 00:26:59.746 ************************************ 00:26:59.746 09:44:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:59.746 * Looking for test storage... 00:26:59.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:59.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.746 --rc genhtml_branch_coverage=1 00:26:59.746 --rc genhtml_function_coverage=1 00:26:59.746 --rc genhtml_legend=1 00:26:59.746 --rc geninfo_all_blocks=1 00:26:59.746 --rc geninfo_unexecuted_blocks=1 00:26:59.746 00:26:59.746 ' 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:59.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.746 --rc genhtml_branch_coverage=1 00:26:59.746 --rc genhtml_function_coverage=1 00:26:59.746 --rc genhtml_legend=1 00:26:59.746 --rc geninfo_all_blocks=1 00:26:59.746 --rc geninfo_unexecuted_blocks=1 00:26:59.746 00:26:59.746 ' 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:59.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.746 --rc genhtml_branch_coverage=1 00:26:59.746 --rc genhtml_function_coverage=1 00:26:59.746 --rc genhtml_legend=1 00:26:59.746 --rc geninfo_all_blocks=1 00:26:59.746 --rc geninfo_unexecuted_blocks=1 00:26:59.746 00:26:59.746 ' 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:59.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.746 --rc genhtml_branch_coverage=1 00:26:59.746 --rc genhtml_function_coverage=1 00:26:59.746 --rc genhtml_legend=1 00:26:59.746 --rc geninfo_all_blocks=1 
00:26:59.746 --rc geninfo_unexecuted_blocks=1 00:26:59.746 00:26:59.746 ' 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.746 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:59.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:59.747 09:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.894 09:44:53 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:07.894 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.894 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:07.895 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:07.895 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:07.895 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:07.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:27:07.895 00:27:07.895 --- 10.0.0.2 ping statistics --- 00:27:07.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.895 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:07.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:27:07.895 00:27:07.895 --- 10.0.0.1 ping statistics --- 00:27:07.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.895 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=449631 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 449631 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 449631 ']' 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:07.895 09:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:07.895 [2024-11-19 09:44:53.741636] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:07.895 [2024-11-19 09:44:53.741698] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.895 [2024-11-19 09:44:53.842510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:07.895 [2024-11-19 09:44:53.893341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.895 [2024-11-19 09:44:53.893391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.895 [2024-11-19 09:44:53.893400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.895 [2024-11-19 09:44:53.893407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:07.895 [2024-11-19 09:44:53.893413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.895 [2024-11-19 09:44:53.895220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.895 [2024-11-19 09:44:53.895394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.895 [2024-11-19 09:44:53.895395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.895 09:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.895 09:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:07.895 09:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:07.895 09:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:07.895 09:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:07.895 09:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.895 09:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:08.158 [2024-11-19 09:44:54.782781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.158 09:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:08.420 Malloc0 00:27:08.420 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:08.682 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:08.943 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.943 [2024-11-19 09:44:55.616610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.943 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:09.205 [2024-11-19 09:44:55.817275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:09.205 09:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:09.466 [2024-11-19 09:44:56.005983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=449998 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 449998 /var/tmp/bdevperf.sock 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 449998 ']' 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.466 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:10.409 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.409 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:10.409 09:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:10.671 NVMe0n1 00:27:10.671 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:10.933 00:27:11.193 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=450337 00:27:11.193 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:11.193 09:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
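(Editor's recap, not part of the captured log.) The netns plumbing that `nvmf_tcp_init` performed earlier in this run can be reproduced standalone. The sketch below prints the command sequence instead of executing it, since the real steps need root; the interface names `cvl_0_0`/`cvl_0_1`, the namespace `cvl_0_0_ns_spdk`, and the `10.0.0.0/24` addresses are taken from the log, while the `plan` helper and the dry-run structure are illustrative additions.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator netns topology built by nvmf_tcp_init.
# The target-side NIC is moved into a namespace; the initiator-side NIC stays
# in the root namespace; an iptables rule admits NVMe/TCP traffic on port 4420.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

# plan: print a command instead of running it (remove to execute for real, as root)
plan() { printf '%s\n' "$*"; }

plan ip -4 addr flush "$TARGET_IF"
plan ip -4 addr flush "$INITIATOR_IF"
plan ip netns add "$NS"
plan ip link set "$TARGET_IF" netns "$NS"
plan ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
plan ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
plan ip link set "$INITIATOR_IF" up
plan ip netns exec "$NS" ip link set "$TARGET_IF" up
plan ip netns exec "$NS" ip link set lo up
plan iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

With the `plan` wrapper removed, the final `ping -c 1 10.0.0.2` / `ip netns exec "$NS" ping -c 1 10.0.0.1` checks seen in the log verify reachability in both directions before the target app starts.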
00:27:12.137 09:44:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.137 [2024-11-19 09:44:58.851251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25844f0 is same with the state(6) to be set 00:27:12.398 09:44:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:15.699 09:45:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:15.699 00:27:15.699 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:15.699 [2024-11-19 09:45:02.351868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2585040 is same with the state(6) to be set 00:27:15.699 09:45:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:19.000 09:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:19.000 [2024-11-19 09:45:05.537082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.000 09:45:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:19.942 09:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:20.204 [2024-11-19 09:45:06.725335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x244a4c0 is same with the state(6) to be set 00:27:20.204 [2024-11-19
09:45:06.725565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244a4c0 is same with the state(6) to be set 00:27:20.204 [2024-11-19 09:45:06.725572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244a4c0 is same with the state(6) to be set 00:27:20.204 [2024-11-19 09:45:06.725576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244a4c0 is same with the state(6) to be set 00:27:20.204 [2024-11-19 09:45:06.725581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244a4c0 is same with the state(6) to be set 00:27:20.204 [2024-11-19 09:45:06.725585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244a4c0 is same with the state(6) to be set 00:27:20.204 [2024-11-19 09:45:06.725590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244a4c0 is same with the state(6) to be set 00:27:20.204 09:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 450337 00:27:26.793 { 00:27:26.793 "results": [ 00:27:26.793 { 00:27:26.793 "job": "NVMe0n1", 00:27:26.793 "core_mask": "0x1", 00:27:26.793 "workload": "verify", 00:27:26.793 "status": "finished", 00:27:26.793 "verify_range": { 00:27:26.793 "start": 0, 00:27:26.793 "length": 16384 00:27:26.793 }, 00:27:26.793 "queue_depth": 128, 00:27:26.793 "io_size": 4096, 00:27:26.793 "runtime": 15.009308, 00:27:26.793 "iops": 12489.11675341728, 00:27:26.793 "mibps": 48.78561231803625, 00:27:26.793 "io_failed": 8404, 00:27:26.793 "io_timeout": 0, 00:27:26.793 "avg_latency_us": 9788.471404340922, 00:27:26.793 "min_latency_us": 525.6533333333333, 00:27:26.793 "max_latency_us": 18022.4 00:27:26.793 } 00:27:26.793 ], 00:27:26.793 "core_count": 1 00:27:26.793 } 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 449998 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' 
-z 449998 ']' 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 449998 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 449998 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 449998' 00:27:26.793 killing process with pid 449998 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 449998 00:27:26.793 09:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 449998 00:27:26.793 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:26.793 [2024-11-19 09:44:56.088050] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:26.793 [2024-11-19 09:44:56.088122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449998 ] 00:27:26.793 [2024-11-19 09:44:56.179510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.793 [2024-11-19 09:44:56.228806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.793 Running I/O for 15 seconds... 
00:27:26.793 10993.00 IOPS, 42.94 MiB/s [2024-11-19T08:45:13.541Z] [2024-11-19 09:44:58.852846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.852882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.852899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.852907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.852917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.852925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.852935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.852942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.852952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.852959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.852969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.793 [2024-11-19 09:44:58.852976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.852986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.852994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.853003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.853011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.853021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.853029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.853038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.853045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.853055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.853063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.853080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.853089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.853099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.853109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.853119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.793 [2024-11-19 09:44:58.853127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.793 [2024-11-19 09:44:58.853138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.794 [2024-11-19 09:44:58.853145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.794 [2024-11-19 09:44:58.853167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.794 [2024-11-19 09:44:58.853185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.794 [2024-11-19 09:44:58.853204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 
[2024-11-19 09:44:58.853297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853394] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.794 [2024-11-19 09:44:58.853722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.794 [2024-11-19 09:44:58.853729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 
09:44:58.853791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853882] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.853984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.853993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.795 [2024-11-19 09:44:58.854053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.795 [2024-11-19 09:44:58.854070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.795 [2024-11-19 09:44:58.854087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.795 [2024-11-19 09:44:58.854104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.795 [2024-11-19 09:44:58.854121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.795 [2024-11-19 09:44:58.854138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.795 [2024-11-19 09:44:58.854155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 
[2024-11-19 09:44:58.854280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.795 [2024-11-19 09:44:58.854306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.795 [2024-11-19 09:44:58.854313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854374] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.796 [2024-11-19 09:44:58.854713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.796 [2024-11-19 09:44:58.854743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95608 len:8 PRP1 0x0 PRP2 0x0 00:27:26.796 [2024-11-19 09:44:58.854751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.796 [2024-11-19 09:44:58.854769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.796 [2024-11-19 09:44:58.854775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95616 len:8 PRP1 0x0 PRP2 0x0 00:27:26.796 [2024-11-19 09:44:58.854782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 
09:44:58.854790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.796 [2024-11-19 09:44:58.854796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.796 [2024-11-19 09:44:58.854802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95624 len:8 PRP1 0x0 PRP2 0x0 00:27:26.796 [2024-11-19 09:44:58.854810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.796 [2024-11-19 09:44:58.854818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.796 [2024-11-19 09:44:58.854823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.796 [2024-11-19 09:44:58.854830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95632 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.854837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.854844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.854850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.854858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95640 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.854866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.854874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.854879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 
[2024-11-19 09:44:58.854886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95648 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.854893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.854900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.854906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.854912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95656 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.854920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.854928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.854933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.854939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95664 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.854946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.854954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.854960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.854966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95672 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.854973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.854981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.854987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.854993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95680 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.854999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95696 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855069] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95704 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95712 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95728 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 
[2024-11-19 09:44:58.855171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95736 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95744 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95752 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95760 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.797 [2024-11-19 09:44:58.855295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.797 [2024-11-19 09:44:58.855301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95768 len:8 PRP1 0x0 PRP2 0x0 00:27:26.797 [2024-11-19 09:44:58.855309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.797 [2024-11-19 09:44:58.855316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.798 [2024-11-19 09:44:58.855322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.798 [2024-11-19 09:44:58.855328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95776 len:8 PRP1 0x0 PRP2 0x0 00:27:26.798 [2024-11-19 09:44:58.855335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:44:58.855342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.798 [2024-11-19 09:44:58.855349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.798 [2024-11-19 09:44:58.855355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95784 len:8 PRP1 0x0 PRP2 0x0 00:27:26.798 [2024-11-19 09:44:58.855363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:44:58.855406] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:26.798 [2024-11-19 09:44:58.855429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.798 [2024-11-19 09:44:58.855437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:44:58.855446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.798 [2024-11-19 09:44:58.855453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:44:58.855462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.798 [2024-11-19 09:44:58.855470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:44:58.855478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.798 [2024-11-19 09:44:58.855485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:44:58.855492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:27:26.798 [2024-11-19 09:44:58.855518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1615d70 (9): Bad file descriptor 00:27:26.798 [2024-11-19 09:44:58.859018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:27:26.798 [2024-11-19 09:44:58.885347] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:27:26.798 11082.50 IOPS, 43.29 MiB/s [2024-11-19T08:45:13.546Z] 11378.33 IOPS, 44.45 MiB/s [2024-11-19T08:45:13.546Z] 11782.50 IOPS, 46.03 MiB/s [2024-11-19T08:45:13.546Z] [2024-11-19 09:45:02.353344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 
[2024-11-19 09:45:02.353494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.798 [2024-11-19 09:45:02.353612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.798 [2024-11-19 09:45:02.353618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.799 [2024-11-19 09:45:02.353696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.799 [2024-11-19 09:45:02.353899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.353987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.799 [2024-11-19 09:45:02.353994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 
[2024-11-19 09:45:02.354099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354173] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 
09:45:02.354447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.799 [2024-11-19 09:45:02.354488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.799 [2024-11-19 09:45:02.354495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:26.800 [2024-11-19 09:45:02.354640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.800 [2024-11-19 09:45:02.354646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:26.800 [2024-11-19 09:45:02.354651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-19 09:45:02.354657 through 09:45:02.354881: repeated nvme_io_qpair_print_command WRITE (sqid:1 nsid:1, lba:51192 through lba:51344, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) / spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) pairs ...]
00:27:26.800 [2024-11-19 09:45:02.354901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:26.800 [2024-11-19 09:45:02.354906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:26.800 [2024-11-19 09:45:02.354911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51352 len:8 PRP1 0x0 PRP2 0x0
00:27:26.800 [2024-11-19 09:45:02.354918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.800 [2024-11-19 09:45:02.354951] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:26.800 [2024-11-19 09:45:02.354967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:26.800 [2024-11-19 09:45:02.354973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.800 [2024-11-19 09:45:02.354980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:26.800 [2024-11-19 09:45:02.354985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.800 [2024-11-19 09:45:02.354993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:26.800 [2024-11-19 09:45:02.354999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.800 [2024-11-19 09:45:02.355004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:26.800 [2024-11-19 09:45:02.355009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.800 [2024-11-19 09:45:02.355014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:27:26.800 [2024-11-19 09:45:02.357440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:27:26.800 [2024-11-19 09:45:02.357460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1615d70 (9): Bad file descriptor
00:27:26.800 [2024-11-19 09:45:02.382829] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:27:26.800 11922.40 IOPS, 46.57 MiB/s [2024-11-19T08:45:13.548Z]
00:27:26.800 12121.17 IOPS, 47.35 MiB/s [2024-11-19T08:45:13.548Z]
00:27:26.800 12238.43 IOPS, 47.81 MiB/s [2024-11-19T08:45:13.548Z]
00:27:26.800 12318.25 IOPS, 48.12 MiB/s [2024-11-19T08:45:13.548Z]
00:27:26.800 [2024-11-19 09:45:06.726941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.800 [2024-11-19 09:45:06.726972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.800 [2024-11-19 09:45:06.726985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:26.800 [2024-11-19 09:45:06.726990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-19 09:45:06.726997 through 09:45:06.727852: repeated nvme_io_qpair_print_command WRITE (sqid:1 nsid:1, lba:119576 through lba:120144, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) / spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) pairs ...]
[... 2024-11-19 09:45:06.727859 through 09:45:06.727934: repeated nvme_io_qpair_print_command READ (sqid:1 nsid:1, lba:119448 through lba:119496, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) / spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) pairs ...]
00:27:26.801 [2024-11-19 09:45:06.727941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:26.801 [2024-11-19 09:45:06.727946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.801 [2024-11-19 09:45:06.727953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:26.801 [2024-11-19 09:45:06.727958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.801 [2024-11-19 09:45:06.727964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:26.801 [2024-11-19 09:45:06.727969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.801 [2024-11-19 09:45:06.727986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:26.801 [2024-11-19 09:45:06.727992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120176 len:8 PRP1 0x0 PRP2 0x0
00:27:26.801 [2024-11-19 09:45:06.727998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.801 [2024-11-19 09:45:06.728005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:26.801 [2024-11-19 09:45:06.728009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:26.801 [2024-11-19 09:45:06.728013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120184 len:8 PRP1 0x0 PRP2 0x0
00:27:26.801 [2024-11-19 09:45:06.728018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:26.801 [2024-11-19 09:45:06.728024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:26.801 [2024-11-19 09:45:06.728028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:26.801 [2024-11-19 09:45:06.728032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120192 len:8 PRP1 0x0 PRP2 0x0
00:27:26.801 [2024-11-19 09:45:06.728037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 09:45:06.728045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.801 [2024-11-19 09:45:06.728050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.801 [2024-11-19 09:45:06.728055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120200 len:8 PRP1 0x0 PRP2 0x0 00:27:26.801 [2024-11-19 09:45:06.728060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.801 [2024-11-19 09:45:06.728066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.801 [2024-11-19 09:45:06.728070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.801 [2024-11-19 09:45:06.728074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120208 len:8 PRP1 0x0 PRP2 0x0 00:27:26.801 [2024-11-19 09:45:06.728079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.801 [2024-11-19 09:45:06.728084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120216 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:27:26.802 [2024-11-19 09:45:06.728113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120224 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120232 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120240 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120248 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728179] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120256 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120264 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120272 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728246] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120280 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120288 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120296 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120304 len:8 PRP1 0x0 PRP2 
0x0 00:27:26.802 [2024-11-19 09:45:06.728313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120312 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120320 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120328 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728376] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120336 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120344 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120352 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120360 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120368 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120376 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120384 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.728526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.728530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120392 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.728536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.728542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.738985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120400 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120408 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739081] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120416 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120424 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120432 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120440 len:8 PRP1 0x0 PRP2 0x0 
00:27:26.802 [2024-11-19 09:45:06.739178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120448 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120456 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119504 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739266] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119512 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119520 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119528 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739357] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119536 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119544 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119552 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:26.802 [2024-11-19 09:45:06.739429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:26.802 [2024-11-19 09:45:06.739435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119560 len:8 PRP1 0x0 PRP2 0x0 00:27:26.802 [2024-11-19 09:45:06.739441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739486] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:26.802 [2024-11-19 09:45:06.739514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.802 [2024-11-19 09:45:06.739523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.802 [2024-11-19 09:45:06.739532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.802 [2024-11-19 09:45:06.739540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.803 [2024-11-19 09:45:06.739547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.803 [2024-11-19 09:45:06.739554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.803 [2024-11-19 09:45:06.739562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.803 [2024-11-19 09:45:06.739569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.803 [2024-11-19 09:45:06.739577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:27:26.803 [2024-11-19 09:45:06.739616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1615d70 (9): Bad file descriptor 00:27:26.803 [2024-11-19 09:45:06.742852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:26.803 12272.44 IOPS, 47.94 MiB/s [2024-11-19T08:45:13.551Z] [2024-11-19 09:45:06.845188] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:27:26.803 12283.20 IOPS, 47.98 MiB/s [2024-11-19T08:45:13.551Z] 12327.09 IOPS, 48.15 MiB/s [2024-11-19T08:45:13.551Z] 12386.92 IOPS, 48.39 MiB/s [2024-11-19T08:45:13.551Z] 12425.38 IOPS, 48.54 MiB/s [2024-11-19T08:45:13.551Z] 12457.57 IOPS, 48.66 MiB/s [2024-11-19T08:45:13.551Z] 12488.33 IOPS, 48.78 MiB/s 00:27:26.803 Latency(us) 00:27:26.803 [2024-11-19T08:45:13.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.803 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:26.803 Verification LBA range: start 0x0 length 0x4000 00:27:26.803 NVMe0n1 : 15.01 12489.12 48.79 559.92 0.00 9788.47 525.65 18022.40 00:27:26.803 [2024-11-19T08:45:13.551Z] =================================================================================================================== 00:27:26.803 [2024-11-19T08:45:13.551Z] Total : 12489.12 48.79 559.92 0.00 9788.47 525.65 18022.40 00:27:26.803 Received shutdown signal, test time was about 15.000000 seconds 00:27:26.803 00:27:26.803 Latency(us) 00:27:26.803 [2024-11-19T08:45:13.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.803 [2024-11-19T08:45:13.551Z] =================================================================================================================== 00:27:26.803 [2024-11-19T08:45:13.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 
-- # grep -c 'Resetting controller successful' 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=453912 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 453912 /var/tmp/bdevperf.sock 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 453912 ']' 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:26.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.803 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:27.375 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:27.375 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:27.375 09:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:27.375 [2024-11-19 09:45:14.025235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:27.375 09:45:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:27.636 [2024-11-19 09:45:14.209699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:27.636 09:45:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:27.898 NVMe0n1 00:27:27.898 09:45:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:28.159 00:27:28.159 09:45:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:28.420 00:27:28.420 09:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:28.420 09:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:28.681 09:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.941 09:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:32.238 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:32.238 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:32.238 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=454932 00:27:32.238 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:32.238 09:45:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 454932 00:27:33.180 { 00:27:33.180 "results": [ 00:27:33.180 { 00:27:33.180 "job": "NVMe0n1", 00:27:33.180 "core_mask": "0x1", 00:27:33.180 "workload": "verify", 00:27:33.180 "status": "finished", 00:27:33.180 "verify_range": { 00:27:33.180 "start": 0, 00:27:33.180 "length": 16384 00:27:33.180 }, 00:27:33.180 "queue_depth": 128, 00:27:33.180 "io_size": 4096, 00:27:33.180 "runtime": 1.005753, 00:27:33.180 "iops": 12709.880060014735, 00:27:33.180 "mibps": 49.64796898443256, 00:27:33.180 "io_failed": 0, 00:27:33.180 "io_timeout": 0, 00:27:33.180 "avg_latency_us": 
10035.868354324753, 00:27:33.180 "min_latency_us": 1597.44, 00:27:33.180 "max_latency_us": 14199.466666666667 00:27:33.180 } 00:27:33.180 ], 00:27:33.180 "core_count": 1 00:27:33.180 } 00:27:33.180 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:33.180 [2024-11-19 09:45:13.075888] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:33.180 [2024-11-19 09:45:13.075945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453912 ] 00:27:33.180 [2024-11-19 09:45:13.157948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.180 [2024-11-19 09:45:13.187387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.180 [2024-11-19 09:45:15.439663] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:33.180 [2024-11-19 09:45:15.439700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.180 [2024-11-19 09:45:15.439709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.180 [2024-11-19 09:45:15.439716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.180 [2024-11-19 09:45:15.439722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.180 [2024-11-19 09:45:15.439727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:33.180 [2024-11-19 09:45:15.439732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.180 [2024-11-19 09:45:15.439738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.180 [2024-11-19 09:45:15.439743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.180 [2024-11-19 09:45:15.439749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:27:33.180 [2024-11-19 09:45:15.439767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:27:33.180 [2024-11-19 09:45:15.439778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x815d70 (9): Bad file descriptor 00:27:33.180 [2024-11-19 09:45:15.534316] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:27:33.180 Running I/O for 1 seconds... 
00:27:33.180 12655.00 IOPS, 49.43 MiB/s 00:27:33.180 Latency(us) 00:27:33.180 [2024-11-19T08:45:19.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.180 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:33.180 Verification LBA range: start 0x0 length 0x4000 00:27:33.180 NVMe0n1 : 1.01 12709.88 49.65 0.00 0.00 10035.87 1597.44 14199.47 00:27:33.180 [2024-11-19T08:45:19.928Z] =================================================================================================================== 00:27:33.180 [2024-11-19T08:45:19.928Z] Total : 12709.88 49.65 0.00 0.00 10035.87 1597.44 14199.47 00:27:33.180 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:33.180 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:33.440 09:45:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:33.440 09:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:33.440 09:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:33.701 09:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:33.961 09:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 453912 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 453912 ']' 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 453912 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453912 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:37.260 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453912' 00:27:37.260 killing process with pid 453912 00:27:37.261 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 453912 00:27:37.261 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 453912 00:27:37.261 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:37.261 09:45:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:37.521 rmmod nvme_tcp 00:27:37.521 rmmod nvme_fabrics 00:27:37.521 rmmod nvme_keyring 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 449631 ']' 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 449631 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 449631 ']' 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 449631 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 449631 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 449631' 00:27:37.521 killing process with pid 449631 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 449631 00:27:37.521 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 449631 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.781 09:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.694 09:45:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.694 00:27:39.694 real 0m40.430s 00:27:39.694 user 2m4.349s 00:27:39.694 sys 
0m8.821s 00:27:39.694 09:45:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:39.694 09:45:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:39.694 ************************************ 00:27:39.694 END TEST nvmf_failover 00:27:39.694 ************************************ 00:27:39.694 09:45:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:39.694 09:45:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:39.694 09:45:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.694 09:45:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.956 ************************************ 00:27:39.956 START TEST nvmf_host_discovery 00:27:39.956 ************************************ 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:39.956 * Looking for test storage... 
00:27:39.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:39.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.956 --rc genhtml_branch_coverage=1 00:27:39.956 --rc genhtml_function_coverage=1 00:27:39.956 --rc 
genhtml_legend=1 00:27:39.956 --rc geninfo_all_blocks=1 00:27:39.956 --rc geninfo_unexecuted_blocks=1 00:27:39.956 00:27:39.956 ' 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:39.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.956 --rc genhtml_branch_coverage=1 00:27:39.956 --rc genhtml_function_coverage=1 00:27:39.956 --rc genhtml_legend=1 00:27:39.956 --rc geninfo_all_blocks=1 00:27:39.956 --rc geninfo_unexecuted_blocks=1 00:27:39.956 00:27:39.956 ' 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:39.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.956 --rc genhtml_branch_coverage=1 00:27:39.956 --rc genhtml_function_coverage=1 00:27:39.956 --rc genhtml_legend=1 00:27:39.956 --rc geninfo_all_blocks=1 00:27:39.956 --rc geninfo_unexecuted_blocks=1 00:27:39.956 00:27:39.956 ' 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:39.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.956 --rc genhtml_branch_coverage=1 00:27:39.956 --rc genhtml_function_coverage=1 00:27:39.956 --rc genhtml_legend=1 00:27:39.956 --rc geninfo_all_blocks=1 00:27:39.956 --rc geninfo_unexecuted_blocks=1 00:27:39.956 00:27:39.956 ' 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.956 09:45:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.956 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.217 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.217 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.218 09:45:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.218 09:45:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:40.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:40.218 09:45:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:48.362 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:48.363 
09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.363 09:45:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:48.363 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:48.363 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:48.363 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:48.363 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.363 09:45:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:48.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:27:48.363 00:27:48.363 --- 10.0.0.2 ping statistics --- 00:27:48.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.363 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:27:48.363 00:27:48.363 --- 10.0.0.1 ping statistics --- 00:27:48.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.363 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.363 
09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:48.363 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=460270 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 460270 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 460270 ']' 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.364 09:45:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.364 [2024-11-19 09:45:34.242112] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:48.364 [2024-11-19 09:45:34.242206] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.364 [2024-11-19 09:45:34.341660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.364 [2024-11-19 09:45:34.391911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:48.364 [2024-11-19 09:45:34.391959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.364 [2024-11-19 09:45:34.391968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:48.364 [2024-11-19 09:45:34.391975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:48.364 [2024-11-19 09:45:34.391981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:48.364 [2024-11-19 09:45:34.392732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.364 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.364 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:48.364 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:48.364 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:48.364 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.364 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.364 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.364 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.364 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.364 [2024-11-19 09:45:35.101903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.625 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.625 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:48.625 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.625 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.625 [2024-11-19 09:45:35.114145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:48.625 09:45:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.626 null0 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.626 null1 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=460316 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 460316 /tmp/host.sock 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 460316 ']' 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:48.626 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.626 09:45:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.626 [2024-11-19 09:45:35.218470] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:48.626 [2024-11-19 09:45:35.218535] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460316 ] 00:27:48.626 [2024-11-19 09:45:35.311384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.626 [2024-11-19 09:45:35.364655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:49.569 09:45:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:49.569 09:45:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:49.569 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:49.570 09:45:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:49.570 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.831 [2024-11-19 09:45:36.365219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:49.831 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:27:49.832 09:45:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:50.403 [2024-11-19 09:45:37.049665] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:50.403 [2024-11-19 09:45:37.049686] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:50.403 [2024-11-19 09:45:37.049699] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:50.403 [2024-11-19 09:45:37.136965] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:50.665 [2024-11-19 09:45:37.238961] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:50.665 [2024-11-19 09:45:37.239907] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1330780:1 started. 00:27:50.665 [2024-11-19 09:45:37.241512] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:50.665 [2024-11-19 09:45:37.241530] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:50.665 [2024-11-19 09:45:37.249393] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1330780 was disconnected and freed. delete nvme_qpair. 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:50.926 09:45:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:50.926 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:51.188 
09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:51.188 09:45:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:51.449 [2024-11-19 09:45:37.997237] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1330b20:1 started. 00:27:51.449 [2024-11-19 09:45:38.001189] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1330b20 was disconnected and freed. delete nvme_qpair. 
00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:51.449 [2024-11-19 09:45:38.089793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:51.449 [2024-11-19 09:45:38.090844] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:51.449 [2024-11-19 09:45:38.090867] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.449 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:51.450 09:45:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:51.450 [2024-11-19 09:45:38.179577] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:51.450 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:51.710 09:45:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:51.710 [2024-11-19 09:45:38.447071] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:27:51.710 [2024-11-19 09:45:38.447107] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:51.710 [2024-11-19 09:45:38.447116] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:27:51.711 [2024-11-19 09:45:38.447121] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 [2024-11-19 09:45:39.365518] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:52.653 [2024-11-19 09:45:39.365534] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.653 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:52.654 [2024-11-19 09:45:39.371844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.654 [2024-11-19 09:45:39.371859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.654 [2024-11-19 09:45:39.371866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.654 [2024-11-19 09:45:39.371871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.654 [2024-11-19 09:45:39.371877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.654 [2024-11-19 09:45:39.371883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.654 [2024-11-19 09:45:39.371889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.654 [2024-11-19 09:45:39.371894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.654 [2024-11-19 09:45:39.371899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e10 is same with the state(6) to be set 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:52.654 09:45:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:52.654 [2024-11-19 09:45:39.381859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300e10 (9): Bad file descriptor 00:27:52.654 [2024-11-19 09:45:39.391894] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:52.654 [2024-11-19 09:45:39.391903] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:52.654 [2024-11-19 09:45:39.391907] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:52.654 [2024-11-19 09:45:39.391911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:52.654 [2024-11-19 09:45:39.391924] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:52.654 [2024-11-19 09:45:39.392184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.654 [2024-11-19 09:45:39.392196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1300e10 with addr=10.0.0.2, port=4420 00:27:52.654 [2024-11-19 09:45:39.392202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e10 is same with the state(6) to be set 00:27:52.654 [2024-11-19 09:45:39.392212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300e10 (9): Bad file descriptor 00:27:52.654 [2024-11-19 09:45:39.392220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:52.654 [2024-11-19 09:45:39.392225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:52.654 [2024-11-19 09:45:39.392232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:52.654 [2024-11-19 09:45:39.392237] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:52.654 [2024-11-19 09:45:39.392241] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:52.654 [2024-11-19 09:45:39.392244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:52.654 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.916 [2024-11-19 09:45:39.401953] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:52.916 [2024-11-19 09:45:39.401963] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:52.916 [2024-11-19 09:45:39.401966] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:52.916 [2024-11-19 09:45:39.401969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:52.916 [2024-11-19 09:45:39.401980] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:52.916 [2024-11-19 09:45:39.402376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.916 [2024-11-19 09:45:39.402408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1300e10 with addr=10.0.0.2, port=4420
00:27:52.916 [2024-11-19 09:45:39.402417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e10 is same with the state(6) to be set
00:27:52.916 [2024-11-19 09:45:39.402431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300e10 (9): Bad file descriptor
00:27:52.916 [2024-11-19 09:45:39.402440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:52.916 [2024-11-19 09:45:39.402445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:52.916 [2024-11-19 09:45:39.402451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:52.916 [2024-11-19 09:45:39.402460] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:52.916 [2024-11-19 09:45:39.402464] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:52.916 [2024-11-19 09:45:39.402467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:52.916 [2024-11-19 09:45:39.412010] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:52.916 [2024-11-19 09:45:39.412022] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:52.916 [2024-11-19 09:45:39.412026] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:52.916 [2024-11-19 09:45:39.412029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:52.916 [2024-11-19 09:45:39.412042] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:52.916 [2024-11-19 09:45:39.412429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.916 [2024-11-19 09:45:39.412460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1300e10 with addr=10.0.0.2, port=4420
00:27:52.916 [2024-11-19 09:45:39.412469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e10 is same with the state(6) to be set
00:27:52.916 [2024-11-19 09:45:39.412484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300e10 (9): Bad file descriptor
00:27:52.916 [2024-11-19 09:45:39.412493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:52.916 [2024-11-19 09:45:39.412499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:52.916 [2024-11-19 09:45:39.412504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:52.916 [2024-11-19 09:45:39.412510] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:52.916 [2024-11-19 09:45:39.412514] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:52.916 [2024-11-19 09:45:39.412517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:52.916 [2024-11-19 09:45:39.422072] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:52.916 [2024-11-19 09:45:39.422082] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:52.916 [2024-11-19 09:45:39.422086] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:52.916 [2024-11-19 09:45:39.422089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:52.916 [2024-11-19 09:45:39.422102] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:52.916 [2024-11-19 09:45:39.422446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.916 [2024-11-19 09:45:39.422457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1300e10 with addr=10.0.0.2, port=4420
00:27:52.916 [2024-11-19 09:45:39.422462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e10 is same with the state(6) to be set
00:27:52.917 [2024-11-19 09:45:39.422471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300e10 (9): Bad file descriptor
00:27:52.917 [2024-11-19 09:45:39.422479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:52.917 [2024-11-19 09:45:39.422483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:52.917 [2024-11-19 09:45:39.422492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:52.917 [2024-11-19 09:45:39.422497] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:52.917 [2024-11-19 09:45:39.422500] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:52.917 [2024-11-19 09:45:39.422503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:52.917 [2024-11-19 09:45:39.432131] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:52.917 [2024-11-19 09:45:39.432141] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:52.917 [2024-11-19 09:45:39.432145] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:52.917 [2024-11-19 09:45:39.432148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:52.917 [2024-11-19 09:45:39.432161] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:52.917 [2024-11-19 09:45:39.432459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.917 [2024-11-19 09:45:39.432469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1300e10 with addr=10.0.0.2, port=4420
00:27:52.917 [2024-11-19 09:45:39.432475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e10 is same with the state(6) to be set
00:27:52.917 [2024-11-19 09:45:39.432483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300e10 (9): Bad file descriptor
00:27:52.917 [2024-11-19 09:45:39.432490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:52.917 [2024-11-19 09:45:39.432495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:52.917 [2024-11-19 09:45:39.432500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:52.917 [2024-11-19 09:45:39.432504] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:52.917 [2024-11-19 09:45:39.432508] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:52.917 [2024-11-19 09:45:39.432516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:52.917 [2024-11-19 09:45:39.442190] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:27:52.917 [2024-11-19 09:45:39.442201] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:27:52.917 [2024-11-19 09:45:39.442204] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:27:52.917 [2024-11-19 09:45:39.442207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:52.917 [2024-11-19 09:45:39.442219] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:27:52.917 [2024-11-19 09:45:39.442418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.917 [2024-11-19 09:45:39.442429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1300e10 with addr=10.0.0.2, port=4420
00:27:52.917 [2024-11-19 09:45:39.442435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300e10 is same with the state(6) to be set
00:27:52.917 [2024-11-19 09:45:39.442443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300e10 (9): Bad file descriptor
00:27:52.917 [2024-11-19 09:45:39.442450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:27:52.917 [2024-11-19 09:45:39.442455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:27:52.917 [2024-11-19 09:45:39.442460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:27:52.917 [2024-11-19 09:45:39.442465] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:27:52.917 [2024-11-19 09:45:39.442468] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:27:52.917 [2024-11-19 09:45:39.442471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:27:52.917 [2024-11-19 09:45:39.451786] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:27:52.917 [2024-11-19 09:45:39.451799] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:52.917 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:52.918 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.179 09:45:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:54.121 [2024-11-19 09:45:40.770276] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:27:54.121 [2024-11-19 09:45:40.770292] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:27:54.121 [2024-11-19 09:45:40.770301] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:27:54.121 [2024-11-19 09:45:40.858544] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:27:54.693 [2024-11-19 09:45:41.126827] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:27:54.693 [2024-11-19 09:45:41.127504] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1311eb0:1 started.
00:27:54.693 [2024-11-19 09:45:41.128865] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:27:54.693 [2024-11-19 09:45:41.128888] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:54.693 [2024-11-19 09:45:41.138937] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1311eb0 was disconnected and freed. delete nvme_qpair.
00:27:54.693 request:
00:27:54.693 {
00:27:54.693 "name": "nvme",
00:27:54.693 "trtype": "tcp",
00:27:54.693 "traddr": "10.0.0.2",
00:27:54.693 "adrfam": "ipv4",
00:27:54.693 "trsvcid": "8009",
00:27:54.693 "hostnqn": "nqn.2021-12.io.spdk:test",
00:27:54.693 "wait_for_attach": true,
00:27:54.693 "method": "bdev_nvme_start_discovery",
00:27:54.693 "req_id": 1
00:27:54.693 }
00:27:54.693 Got JSON-RPC error response
00:27:54.693 response:
00:27:54.693 {
00:27:54.693 "code": -17,
00:27:54.693 "message": "File exists"
00:27:54.693 }
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.693 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:54.693 request:
00:27:54.693 {
00:27:54.693 "name": "nvme_second",
00:27:54.693 "trtype": "tcp",
00:27:54.693 "traddr": "10.0.0.2",
00:27:54.693 "adrfam": "ipv4",
00:27:54.693 "trsvcid": "8009",
00:27:54.693 "hostnqn": "nqn.2021-12.io.spdk:test",
00:27:54.693 "wait_for_attach": true,
00:27:54.693 "method": "bdev_nvme_start_discovery",
00:27:54.693 "req_id": 1
00:27:54.693 }
00:27:54.693 Got JSON-RPC error response
00:27:54.693 response:
00:27:54.694 {
00:27:54.694 "code": -17,
00:27:54.694 "message": "File exists"
00:27:54.694 }
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.694 09:45:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:56.078 [2024-11-19 09:45:42.393779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.078 [2024-11-19 09:45:42.393802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ac80 with addr=10.0.0.2, port=8010
00:27:56.078 [2024-11-19 09:45:42.393812] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:27:56.078 [2024-11-19 09:45:42.393818] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:27:56.078 [2024-11-19 09:45:42.393823] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:27:57.019 [2024-11-19 09:45:43.396122]
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.019 [2024-11-19 09:45:43.396140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ac80 with addr=10.0.0.2, port=8010 00:27:57.019 [2024-11-19 09:45:43.396148] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:57.019 [2024-11-19 09:45:43.396153] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:57.019 [2024-11-19 09:45:43.396161] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:57.962 [2024-11-19 09:45:44.398131] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:57.962 request: 00:27:57.962 { 00:27:57.962 "name": "nvme_second", 00:27:57.962 "trtype": "tcp", 00:27:57.962 "traddr": "10.0.0.2", 00:27:57.962 "adrfam": "ipv4", 00:27:57.962 "trsvcid": "8010", 00:27:57.962 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:57.962 "wait_for_attach": false, 00:27:57.962 "attach_timeout_ms": 3000, 00:27:57.962 "method": "bdev_nvme_start_discovery", 00:27:57.962 "req_id": 1 00:27:57.962 } 00:27:57.962 Got JSON-RPC error response 00:27:57.962 response: 00:27:57.962 { 00:27:57.962 "code": -110, 00:27:57.962 "message": "Connection timed out" 00:27:57.962 } 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 460316 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:57.962 rmmod nvme_tcp 00:27:57.962 rmmod nvme_fabrics 00:27:57.962 rmmod nvme_keyring 00:27:57.962 09:45:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 460270 ']' 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 460270 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 460270 ']' 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 460270 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 460270 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 460270' 00:27:57.962 killing process with pid 460270 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 460270 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 460270 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.962 09:45:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.962 09:45:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.506 09:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:00.506 00:28:00.506 real 0m20.294s 00:28:00.506 user 0m23.541s 00:28:00.506 sys 0m7.243s 00:28:00.506 09:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:00.506 09:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:00.506 ************************************ 00:28:00.506 END TEST nvmf_host_discovery 00:28:00.506 ************************************ 00:28:00.507 09:45:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:00.507 09:45:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:00.507 
09:45:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.507 09:45:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.507 ************************************ 00:28:00.507 START TEST nvmf_host_multipath_status 00:28:00.507 ************************************ 00:28:00.507 09:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:00.507 * Looking for test storage... 00:28:00.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.507 09:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:00.507 09:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:28:00.507 09:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.507 09:45:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.507 
09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:00.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.507 --rc genhtml_branch_coverage=1 00:28:00.507 --rc genhtml_function_coverage=1 00:28:00.507 --rc genhtml_legend=1 00:28:00.507 --rc geninfo_all_blocks=1 00:28:00.507 --rc geninfo_unexecuted_blocks=1 00:28:00.507 00:28:00.507 ' 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:00.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.507 --rc genhtml_branch_coverage=1 00:28:00.507 --rc genhtml_function_coverage=1 00:28:00.507 --rc genhtml_legend=1 00:28:00.507 --rc geninfo_all_blocks=1 00:28:00.507 --rc geninfo_unexecuted_blocks=1 00:28:00.507 00:28:00.507 ' 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:00.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.507 --rc genhtml_branch_coverage=1 00:28:00.507 --rc genhtml_function_coverage=1 00:28:00.507 --rc genhtml_legend=1 00:28:00.507 --rc geninfo_all_blocks=1 00:28:00.507 --rc geninfo_unexecuted_blocks=1 00:28:00.507 00:28:00.507 ' 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:00.507 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:00.507 --rc genhtml_branch_coverage=1 00:28:00.507 --rc genhtml_function_coverage=1 00:28:00.507 --rc genhtml_legend=1 00:28:00.507 --rc geninfo_all_blocks=1 00:28:00.507 --rc geninfo_unexecuted_blocks=1 00:28:00.507 00:28:00.507 ' 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.507 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:00.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:00.508 09:45:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.508 09:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.661 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:08.662 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:08.662 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:08.662 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.662 09:45:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:08.662 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.662 09:45:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:08.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:28:08.662 00:28:08.662 --- 10.0.0.2 ping statistics --- 00:28:08.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.662 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
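The namespace plumbing traced above (flush both ports, create the namespace, move one port into it, address both ends, open the NVMe/TCP port with iptables, then ping in both directions) amounts to a small standalone script. A minimal sketch, assuming root privileges and the interface names from this run (`cvl_0_0`/`cvl_0_1`, the two E810 ports; any connected pair, e.g. a veth pair, would work the same way):

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace topology built by nvmf_tcp_init in this log.
# Requires root. cvl_0_0/cvl_0_1 are this run's E810 netdevs -- substitute
# any two connected interfaces (a veth pair is the usual stand-in).
set -euo pipefail

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0   # moved into the namespace; target side, 10.0.0.2
INIT_IF=cvl_0_1     # stays in the root namespace; initiator side, 10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INIT_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"
ip addr add 10.0.0.1/24 dev "$INIT_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INIT_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Admit NVMe/TCP traffic on the initiator-side interface, then verify
# reachability in both directions before any NVMe traffic is attempted.
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
```

The split puts the SPDK target alone in `cvl_0_0_ns_spdk` while the kernel initiator stays in the root namespace, so both ends of the TCP connection run on one machine without short-circuiting through loopback.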
00:28:08.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:28:08.662 00:28:08.662 --- 10.0.0.1 ping statistics --- 00:28:08.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.662 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=466480 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 466480 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 466480 ']' 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:08.662 09:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:08.662 [2024-11-19 09:45:54.689741] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:08.663 [2024-11-19 09:45:54.689808] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.663 [2024-11-19 09:45:54.788362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:08.663 [2024-11-19 09:45:54.840340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.663 [2024-11-19 09:45:54.840396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
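The `waitforlisten 466480` step above blocks until the freshly launched `nvmf_tgt` answers on its RPC UNIX socket. A minimal sketch of that polling loop, assuming a live SPDK checkout (`rpc.py` and the `rpc_get_methods` method are standard SPDK; the retry count and sleep interval here are illustrative, not the exact autotest values):

```shell
#!/usr/bin/env bash
# Wait for a just-started nvmf_tgt to serve RPCs, bailing out if it dies.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk.sock
nvmfpid=$1          # pid captured when nvmf_tgt was launched

for ((i = 0; i < 100; i++)); do
    # If the target exited during startup there is nothing to wait for.
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    # rpc_get_methods only succeeds once the socket is up and serving.
    if "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        exit 0
    fi
    sleep 0.1
done
echo "timed out waiting for $RPC_SOCK" >&2
exit 1
```

Note that although the target runs inside `cvl_0_0_ns_spdk`, the RPC socket lives in the shared filesystem, so no `ip netns exec` is needed for RPC calls: this is why the log's later `rpc.py` invocations run from the root namespace.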
00:28:08.663 [2024-11-19 09:45:54.840406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.663 [2024-11-19 09:45:54.840413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.663 [2024-11-19 09:45:54.840420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.663 [2024-11-19 09:45:54.842072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.663 [2024-11-19 09:45:54.842080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.924 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:08.924 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:08.924 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:08.924 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:08.924 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:08.924 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.924 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=466480 00:28:08.924 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:09.185 [2024-11-19 09:45:55.725135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.185 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:28:09.446 Malloc0 00:28:09.446 09:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:09.446 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:09.706 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.967 [2024-11-19 09:45:56.538409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.967 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:10.228 [2024-11-19 09:45:56.734932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:10.228 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:10.228 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=466871 00:28:10.228 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:10.228 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 466871 /var/tmp/bdevperf.sock 00:28:10.228 09:45:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 466871 ']' 00:28:10.228 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:10.228 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.228 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:10.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:10.228 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.228 09:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:11.172 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.172 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:11.172 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:11.172 09:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:11.742 Nvme0n1 00:28:11.742 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:12.003 Nvme0n1 00:28:12.003 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:12.003 09:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:14.554 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:14.554 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:14.554 09:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:14.554 09:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:15.494 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:15.494 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:15.494 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.494 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:15.754 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:15.754 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:15.754 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.755 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:15.755 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:15.755 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:15.755 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.755 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:16.015 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.015 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:16.015 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.015 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:16.275 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.275 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:16.275 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.275 09:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:16.535 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.535 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:16.535 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.535 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:16.535 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.535 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:16.535 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:16.795 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:17.057 09:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:17.999 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:17.999 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:17.999 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.999 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:18.259 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:18.259 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:18.259 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.259 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:18.259 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.259 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:18.260 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.260 09:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:18.520 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.520 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:18.520 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.520 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:18.781 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.781 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:18.781 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.781 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:18.781 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.781 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:18.781 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.781 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:19.042 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.042 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:19.042 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:19.303 09:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:19.563 09:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:20.509 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:20.509 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:20.509 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.509 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:20.769 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:20.769 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:20.769 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.769 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:20.769 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:20.769 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:20.769 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.769 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:21.030 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.030 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:21.030 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.030 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:21.291 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.291 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:21.291 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.291 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:21.291 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.291 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:21.291 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.291 09:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:21.551 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.551 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:21.551 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:21.812 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:21.812 09:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.194 09:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:23.455 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.455 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:23.455 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.455 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:23.715 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.715 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:23.715 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.715 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:23.975 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.975 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:23.975 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.975 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:23.975 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:23.975 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:23.975 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:24.236 09:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:24.496 09:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:25.435 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:25.435 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:25.435 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.435 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:25.696 09:46:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:25.696 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:25.696 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.696 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:25.696 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:25.696 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:25.696 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.696 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:25.957 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.957 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:25.957 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.957 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:26.217 
09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:26.218 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:26.218 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.218 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:26.218 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:26.218 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:26.218 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.218 09:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:26.478 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:26.478 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:26.478 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:26.739 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:26.998 09:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:27.940 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:27.940 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:27.940 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.940 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:27.940 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:27.940 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:27.940 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.940 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:28.201 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.201 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:28.201 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.201 09:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:28.462 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.462 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:28.462 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.462 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:28.462 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.462 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:28.462 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.462 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:28.723 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:28.723 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:28.723 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:28.723 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:28.983 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:28.983 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:29.243 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:29.243 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:29.243 09:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:29.504 09:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:30.445 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:30.445 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:30.445 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:30.445 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:30.706 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.706 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:30.706 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.706 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:30.965 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.965 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:30.965 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.965 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:30.965 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.965 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:30.965 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:30.965 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:31.226 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.226 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:31.226 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.226 09:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:31.487 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.487 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:31.487 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:31.487 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:31.748 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:31.748 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:31.748 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:31.748 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:32.009 09:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:32.949 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:32.949 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:32.949 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:32.949 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:33.210 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:33.210 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:33.210 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.210 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:33.210 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:33.210 09:46:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:33.210 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.210 09:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:33.471 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:33.471 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:33.471 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.472 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:33.732 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:33.732 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:33.732 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.732 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:33.994 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:33.994 
09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:33.994 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:33.994 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:33.994 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:33.994 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:33.994 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:34.255 09:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:34.515 09:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:35.457 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:35.457 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:35.457 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.457 09:46:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:35.719 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.719 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:35.719 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.719 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:35.719 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.719 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:35.719 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.719 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:35.980 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:35.980 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:35.980 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:35.980 09:46:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:36.240 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.240 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:36.240 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.240 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:36.501 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.501 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:36.501 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:36.501 09:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:36.501 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:36.501 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:36.501 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:36.762 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:37.023 09:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:37.968 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:37.968 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:37.968 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.968 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:38.229 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.229 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:38.229 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.229 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:38.229 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:38.229 09:46:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:38.229 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.229 09:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:38.490 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.490 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:38.490 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.490 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:38.750 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.750 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:38.750 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.750 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:38.750 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.750 
09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:38.750 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.750 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 466871 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 466871 ']' 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 466871 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466871 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466871' 00:28:39.010 killing process with pid 466871 00:28:39.010 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 466871 00:28:39.010 09:46:25 
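The `port_status` checks traced above all pivot on the same jq filter over `bdev_nvme_get_io_paths` output: `.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD`. A minimal Python sketch of that selection logic follows; the field names are taken from the filter in the trace, but the sample JSON values are illustrative, not actual RPC output:

```python
import json

# Hypothetical sample mimicking the bdev_nvme_get_io_paths JSON shape;
# field names come from the jq filter in the trace, values are illustrative.
sample = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": false}
      ]
    }
  ]
}
""")

def port_status(data, trsvcid, field):
    """Replicates: .poll_groups[].io_paths[] | select(.transport.trsvcid==TRSVCID).FIELD"""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None

print(port_status(sample, "4420", "current"))     # True
print(port_status(sample, "4421", "accessible"))  # False
```

The test script then compares each extracted value against the expected `true`/`false` literal, exactly as the `[[ true == \t\r\u\e ]]` comparisons above do.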
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 466871 00:28:39.010 { 00:28:39.010 "results": [ 00:28:39.010 { 00:28:39.010 "job": "Nvme0n1", 00:28:39.010 "core_mask": "0x4", 00:28:39.010 "workload": "verify", 00:28:39.010 "status": "terminated", 00:28:39.010 "verify_range": { 00:28:39.010 "start": 0, 00:28:39.011 "length": 16384 00:28:39.011 }, 00:28:39.011 "queue_depth": 128, 00:28:39.011 "io_size": 4096, 00:28:39.011 "runtime": 26.871861, 00:28:39.011 "iops": 11151.255955067645, 00:28:39.011 "mibps": 43.55959357448299, 00:28:39.011 "io_failed": 0, 00:28:39.011 "io_timeout": 0, 00:28:39.011 "avg_latency_us": 11452.89873968397, 00:28:39.011 "min_latency_us": 771.4133333333333, 00:28:39.011 "max_latency_us": 3075822.933333333 00:28:39.011 } 00:28:39.011 ], 00:28:39.011 "core_count": 1 00:28:39.011 } 00:28:39.292 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 466871 00:28:39.292 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:39.292 [2024-11-19 09:45:56.816750] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:39.292 [2024-11-19 09:45:56.816829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466871 ] 00:28:39.292 [2024-11-19 09:45:56.908898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.292 [2024-11-19 09:45:56.959173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.292 Running I/O for 90 seconds... 
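The terminated bdevperf job's throughput figures are internally consistent: `mibps` is just `iops × io_size` converted to MiB/s. A quick Python check, with the numbers copied from the results block above (the conversion formula is an assumption inferred from the units shown, not taken from bdevperf source):

```python
# Values copied from the bdevperf "results" block in the log.
iops = 11151.255955067645
io_size = 4096          # bytes per I/O
runtime = 26.871861     # seconds

# Assumed conversion: throughput in MiB/s = IOPS * bytes-per-I/O / 2^20.
mibps = iops * io_size / (1024 * 1024)
print(mibps)  # ~43.5596, matching the logged "mibps" field
```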
00:28:39.292 10083.00 IOPS, 39.39 MiB/s [2024-11-19T08:46:26.040Z] 10178.50 IOPS, 39.76 MiB/s [2024-11-19T08:46:26.040Z] 10179.33 IOPS, 39.76 MiB/s [2024-11-19T08:46:26.040Z] 10549.00 IOPS, 41.21 MiB/s [2024-11-19T08:46:26.040Z] 10871.80 IOPS, 42.47 MiB/s [2024-11-19T08:46:26.040Z] 11096.33 IOPS, 43.35 MiB/s [2024-11-19T08:46:26.040Z] 11234.57 IOPS, 43.89 MiB/s [2024-11-19T08:46:26.040Z] 11350.25 IOPS, 44.34 MiB/s [2024-11-19T08:46:26.040Z] 11461.11 IOPS, 44.77 MiB/s [2024-11-19T08:46:26.040Z] 11515.40 IOPS, 44.98 MiB/s [2024-11-19T08:46:26.040Z] 11567.82 IOPS, 45.19 MiB/s [2024-11-19T08:46:26.040Z] [2024-11-19 09:46:10.787942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.787979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.787996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62960 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 
dnr:0 00:28:39.292 [2024-11-19 09:46:10.788227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 
[2024-11-19 09:46:10.788311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.292 [2024-11-19 09:46:10.788345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:39.292 [2024-11-19 09:46:10.788355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 
09:46:10.788402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788486] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.788561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.788566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.293 [2024-11-19 09:46:10.789655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:39.293 [2024-11-19 09:46:10.789665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.789869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.789874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.294 [2024-11-19 09:46:10.790444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.294 [2024-11-19 09:46:10.790449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.295 [2024-11-19 09:46:10.790884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:39.295 [2024-11-19 09:46:10.790925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.295 [2024-11-19 09:46:10.790931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.790941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.296 [2024-11-19 09:46:10.790946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.790956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.296 [2024-11-19 09:46:10.790961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.790971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.296 [2024-11-19 09:46:10.790976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.790987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.296 [2024-11-19 09:46:10.790992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.296 [2024-11-19 09:46:10.791008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:39.296 [2024-11-19 09:46:10.791814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.296 [2024-11-19 09:46:10.791819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.791987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.791992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.297 [2024-11-19 09:46:10.792002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.297 [2024-11-19 09:46:10.792008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.792023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.792439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.792456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.792471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.792486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.792504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.792519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.792534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.792550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.792560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.803146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.803191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.803199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.803209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.803215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.803225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.803230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.803241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.803246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.803256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.803261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:39.297 [2024-11-19 09:46:10.803271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.297 [2024-11-19 09:46:10.803276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.803986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.298 [2024-11-19 09:46:10.803991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:28:39.298 [2024-11-19 09:46:10.804002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.299 [2024-11-19 09:46:10.804116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.299 [2024-11-19 09:46:10.804132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.299 [2024-11-19 09:46:10.804147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.299 [2024-11-19 09:46:10.804168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.299 [2024-11-19 09:46:10.804183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.299 [2024-11-19 09:46:10.804199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.299 [2024-11-19 09:46:10.804215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.299 [2024-11-19 09:46:10.804527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:28:39.299 [2024-11-19 09:46:10.804537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.300 [2024-11-19 09:46:10.804543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.300 [2024-11-19 09:46:10.804558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.300 [2024-11-19 09:46:10.804574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.300 [2024-11-19 09:46:10.804589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.300 [2024-11-19 09:46:10.804605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.300 [2024-11-19 09:46:10.804620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.300 [2024-11-19 09:46:10.804635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.300 [2024-11-19 09:46:10.804654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.804992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.804997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.805007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.805012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.805022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.300 [2024-11-19 09:46:10.805028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:28:39.300 [2024-11-19 09:46:10.805039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.805229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.805236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.301 [2024-11-19 09:46:10.806804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:39.301 [2024-11-19 09:46:10.806814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.806987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.806993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.807003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.807008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.807018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.807023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.807033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.807039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.807187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.807195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.807206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.807211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.807223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.807229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.814235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.814261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.814287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.814308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.814328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.814349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.814369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.814389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.302 [2024-11-19 09:46:10.814410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:39.302 [2024-11-19 09:46:10.814424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.303 [2024-11-19 09:46:10.814639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.303 [2024-11-19 09:46:10.814660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.303 [2024-11-19 09:46:10.814680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.303 [2024-11-19 09:46:10.814701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.303 [2024-11-19 09:46:10.814723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.303 [2024-11-19 09:46:10.814744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.303 [2024-11-19 09:46:10.814765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.814985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.814992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.815006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.815012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.815026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.815033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.815046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.815053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.815067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.815073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.815088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.815095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.815618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.815632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.815649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.815656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:39.303 [2024-11-19 09:46:10.815670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.303 [2024-11-19 09:46:10.815676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.815697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.304 [2024-11-19 09:46:10.815718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.304 [2024-11-19 09:46:10.815738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.304 [2024-11-19 09:46:10.815762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.304 [2024-11-19 09:46:10.815782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.304 [2024-11-19 09:46:10.815803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.304 [2024-11-19 09:46:10.815823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.304 [2024-11-19 09:46:10.815844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.304 [2024-11-19 09:46:10.815864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.815885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.815905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.815926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.815946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.815967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.815980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.815987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:39.304 [2024-11-19 09:46:10.816340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.304 [2024-11-19 09:46:10.816346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:39.305 [2024-11-19 09:46:10.816953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.305 [2024-11-19 09:46:10.816960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:39.306 [2024-11-19 09:46:10.816973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.306 [2024-11-19 09:46:10.816980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.306 [2024-11-19 09:46:10.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.306 [2024-11-19 09:46:10.817000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:39.306 [2024-11-19 09:46:10.817014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.306 [2024-11-19 09:46:10.817024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:39.306 [2024-11-19 09:46:10.817038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.306 [2024-11-19 09:46:10.817045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:39.306 [2024-11-19 09:46:10.817058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.306 [2024-11-19 09:46:10.817065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:39.306 [2024-11-19 09:46:10.817079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.306 [2024-11-19 09:46:10.817086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:39.306 [2024-11-19 09:46:10.817099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.306 [2024-11-19 09:46:10.817106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.306 [2024-11-19 09:46:10.817120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.306 [2024-11-19 09:46:10.817126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:39.306 [2024-11-19 09:46:10.817140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.817147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.817164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.817171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.817185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.817192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.817205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.817212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.817225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.817232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.817246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.817253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.817267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.817275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.817289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.817295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.817309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.817316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:28:39.306 [2024-11-19 09:46:10.818462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.306 [2024-11-19 09:46:10.818469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.307 [2024-11-19 09:46:10.818614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.307 [2024-11-19 09:46:10.818635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.307 [2024-11-19 09:46:10.818656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.307 [2024-11-19 09:46:10.818676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.307 [2024-11-19 09:46:10.818697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.307 [2024-11-19 09:46:10.818718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.307 [2024-11-19 09:46:10.818738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.818984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.818998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.819005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.819019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.819026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.819039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.819047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.819498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.819509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.819524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.819534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.819547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.307 [2024-11-19 09:46:10.819554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:28:39.307 [2024-11-19 09:46:10.819568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.308 [2024-11-19 09:46:10.819616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.308 [2024-11-19 09:46:10.819637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.308 [2024-11-19 09:46:10.819657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.308 [2024-11-19 09:46:10.819678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.308 [2024-11-19 09:46:10.819698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.308 [2024-11-19 09:46:10.819719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.308 [2024-11-19 09:46:10.819740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.308 [2024-11-19 09:46:10.819760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.819981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.819988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:28:39.308 [2024-11-19 09:46:10.820218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.308 [2024-11-19 09:46:10.820225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.820559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.820568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.825452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.825481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.825500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.825510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.825527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.825536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.825553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.825562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.828275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.828296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.828318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.828327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.828344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.828353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.828370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.828378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.828395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.828403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.828420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.828429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:28:39.309 [2024-11-19 09:46:10.828446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.309 [2024-11-19 09:46:10.828455] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:39.309 [2024-11-19 09:46:10.828472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.309 [2024-11-19 09:46:10.828480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:39.309 [2024-11-19 09:46:10.828503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.309 [2024-11-19 09:46:10.828512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:39.309 [2024-11-19 09:46:10.828529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.309 [2024-11-19 09:46:10.828538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:39.309 [2024-11-19 09:46:10.828555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.309 [2024-11-19 09:46:10.828564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:39.309 [2024-11-19 09:46:10.828580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.309 [2024-11-19 09:46:10.828589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.309 [2024-11-19 09:46:10.828606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.309 [2024-11-19 09:46:10.828615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:39.309 [2024-11-19 09:46:10.828633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.309 [2024-11-19 09:46:10.828641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:39.309 [2024-11-19 09:46:10.828658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.309 [2024-11-19 09:46:10.828667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:39.309 [2024-11-19 09:46:10.828684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.309 [2024-11-19 09:46:10.828692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.828978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.828995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:39.310 [2024-11-19 09:46:10.829547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.310 [2024-11-19 09:46:10.829556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.829634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.829660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.829686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.829711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.829738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.829763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.829791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.829988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.829997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.830014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.830023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.830040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.830048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.830066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.830074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.830096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.830104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.830121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.830130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.830148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.830156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:39.311 11576.50 IOPS, 45.22 MiB/s [2024-11-19T08:46:26.059Z] [2024-11-19 09:46:10.831056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 
[2024-11-19 09:46:10.831072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.831100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.831126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.831152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.831186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.311 [2024-11-19 09:46:10.831212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 
09:46:10.831229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.831238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.831264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.831290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.831319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-11-19 09:46:10.831345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:39.311 [2024-11-19 09:46:10.831362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-11-19 
09:46:10.831371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-11-19 09:46:10.831396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-11-19 09:46:10.831422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.831976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.831994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.832020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.832046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.832072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.832098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.832123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.832149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.832179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.832205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.312 [2024-11-19 09:46:10.832231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.312 [2024-11-19 09:46:10.832239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.832489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.832498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.313 [2024-11-19 09:46:10.833677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:39.313 [2024-11-19 09:46:10.833696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.833977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.833996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.314 [2024-11-19 09:46:10.834610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:39.314 [2024-11-19 09:46:10.834630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.834641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.834672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.834702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.834732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.834762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.834792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.834822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.315 [2024-11-19 09:46:10.834854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.315 [2024-11-19 09:46:10.834884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.315 [2024-11-19 09:46:10.834914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.315 [2024-11-19 09:46:10.834944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.315 [2024-11-19 09:46:10.834975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.834995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.315 [2024-11-19 09:46:10.835005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.315 [2024-11-19 09:46:10.835035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.835427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.835437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.836491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.836510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.836532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.836542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:39.315 [2024-11-19 09:46:10.836562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.315 [2024-11-19 09:46:10.836572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.836602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.836635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.836665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.836694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.316 [2024-11-19 09:46:10.836724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.316 [2024-11-19 09:46:10.836754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.316 [2024-11-19 09:46:10.836784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.316 [2024-11-19 09:46:10.836815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.316 [2024-11-19 09:46:10.836845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.316 [2024-11-19 09:46:10.836875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.316 [2024-11-19 09:46:10.836905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.316 [2024-11-19 09:46:10.836935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.836965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.836985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.836996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.316 [2024-11-19 09:46:10.837519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:39.316 [2024-11-19 09:46:10.837538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.837970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.837990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.838000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.838019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.838030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.838050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.838060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.838080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.838090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.838110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.838125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.838145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.838155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.838975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.838993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.317 [2024-11-19 09:46:10.839423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:39.317 [2024-11-19 09:46:10.839443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.839983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.839993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.840013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.840023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.840043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.840055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.840075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.840085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.840105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.840115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.840135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.840145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.840170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.318 [2024-11-19 09:46:10.840181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:39.318 [2024-11-19 09:46:10.840201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.319 [2024-11-19 09:46:10.840603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.319 [2024-11-19 09:46:10.840633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.319 [2024-11-19 09:46:10.840663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.319 [2024-11-19 09:46:10.840694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.319 [2024-11-19 09:46:10.840723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.319 [2024-11-19 09:46:10.840753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.319 [2024-11-19 09:46:10.840783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.840984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.840995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.841014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.841024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.841044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.841054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.841074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.841084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.841104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.841114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.841134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.841144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.842195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.842214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.842237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.842247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.842267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.319 [2024-11-19 09:46:10.842277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:39.319 [2024-11-19 09:46:10.842296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.320 [2024-11-19 09:46:10.842455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.320 [2024-11-19 09:46:10.842485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.320 [2024-11-19 09:46:10.842516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.320 [2024-11-19 09:46:10.842546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.320 [2024-11-19 09:46:10.842579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.320 [2024-11-19 09:46:10.842609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.320 [2024-11-19 09:46:10.842639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.320 [2024-11-19 09:46:10.842669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.842971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.842991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.843001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.843021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.843031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.843051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.843061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.843081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.843091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.843110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.843120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.843140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.843150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.843175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.843186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.843206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.320 [2024-11-19 09:46:10.843216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:39.320 [2024-11-19 09:46:10.843236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.843840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.843851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:39.321 [2024-11-19 09:46:10.844881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.321 [2024-11-19 09:46:10.844889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.844903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.844910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.844923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.844930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.844944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.844951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.844965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.844972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.844986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.844993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:39.322 [2024-11-19 09:46:10.845607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.322 [2024-11-19 09:46:10.845614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.323 [2024-11-19 09:46:10.845781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.323 [2024-11-19 09:46:10.845806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.323 [2024-11-19 09:46:10.845827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.323 [2024-11-19 09:46:10.845849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.323 [2024-11-19 09:46:10.845870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.323 [2024-11-19 09:46:10.845891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.323 [2024-11-19 09:46:10.845912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.845988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.845995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:39.323 [2024-11-19 09:46:10.846963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.323 [2024-11-19 09:46:10.846969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.846983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.846990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.324 [2024-11-19 09:46:10.847076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.324 [2024-11-19 09:46:10.847096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.324 [2024-11-19 09:46:10.847117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.324 [2024-11-19 09:46:10.847138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.324 [2024-11-19 09:46:10.847163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.324 [2024-11-19 09:46:10.847184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.324 [2024-11-19 09:46:10.847204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.324 [2024-11-19 09:46:10.847225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:39.324 [2024-11-19 09:46:10.847656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.324 [2024-11-19 09:46:10.847663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.847990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.847997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.325 [2024-11-19 09:46:10.848891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.325 [2024-11-19 09:46:10.848905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.848912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.848925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.848932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.848946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.848953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.848966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.848973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.848987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.848994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:39.326 [2024-11-19 09:46:10.849667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.326 [2024-11-19 09:46:10.849674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.849695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.849716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.849737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.849758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.849779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.849802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.849823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.849844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.849864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.849885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.849907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.849927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.849948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.849968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.849982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.849989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.327 [2024-11-19 09:46:10.850532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.850557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.850583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.850609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.850634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.327 [2024-11-19 09:46:10.850652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.327 [2024-11-19 09:46:10.850659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.328 [2024-11-19 09:46:10.850686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.328 [2024-11-19 09:46:10.850712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.328 [2024-11-19 09:46:10.850738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.850983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.850991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.328 [2024-11-19 09:46:10.851355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:39.328 [2024-11-19 09:46:10.851373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:10.851824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:10.851834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.329 10686.00 IOPS, 41.74 MiB/s [2024-11-19T08:46:26.077Z] 9922.71 IOPS, 38.76 MiB/s [2024-11-19T08:46:26.077Z] 9261.20 IOPS, 36.18 MiB/s [2024-11-19T08:46:26.077Z] 9432.31 IOPS, 36.84 MiB/s [2024-11-19T08:46:26.077Z] 9599.12 IOPS, 37.50 MiB/s [2024-11-19T08:46:26.077Z] 9902.78 IOPS, 38.68 MiB/s [2024-11-19T08:46:26.077Z] 10196.16 IOPS, 39.83 MiB/s [2024-11-19T08:46:26.077Z] 10406.75 IOPS, 40.65 MiB/s [2024-11-19T08:46:26.077Z] 10494.76 IOPS, 41.00 MiB/s [2024-11-19T08:46:26.077Z] 10564.64 IOPS, 41.27 MiB/s [2024-11-19T08:46:26.077Z] 10743.13 IOPS, 41.97 MiB/s [2024-11-19T08:46:26.077Z] 10943.38 IOPS, 42.75 MiB/s [2024-11-19T08:46:26.077Z] [2024-11-19 09:46:23.521482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85168 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:39.329 [2024-11-19 09:46:23.521729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:39.329 
[2024-11-19 09:46:23.521960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.329 [2024-11-19 09:46:23.521966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:39.329 [2024-11-19 09:46:23.521977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 09:46:23.521984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.521995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 09:46:23.522000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 09:46:23.522016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 09:46:23.522032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 
09:46:23.522047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 09:46:23.522063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 09:46:23.522078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 09:46:23.522093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 09:46:23.522108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.330 [2024-11-19 09:46:23.522124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522134] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.330 [2024-11-19 09:46:23.522516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:39.330 [2024-11-19 09:46:23.522526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:39.331 [2024-11-19 09:46:23.522541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:39.331 [2024-11-19 09:46:23.522557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:39.331 [2024-11-19 09:46:23.522573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:39.331 [2024-11-19 09:46:23.522588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:39.331 [2024-11-19 09:46:23.522603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:39.331 [2024-11-19 09:46:23.522619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:39.331 [2024-11-19 09:46:23.522634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:39.331 [2024-11-19 09:46:23.522649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:39.331 [2024-11-19 09:46:23.522665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.331 [2024-11-19 09:46:23.522670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:39.331 11071.24 IOPS, 43.25 MiB/s [2024-11-19T08:46:26.079Z] 11118.23 IOPS, 43.43 MiB/s [2024-11-19T08:46:26.079Z] Received shutdown signal, test time was about 26.872482 seconds 00:28:39.331 00:28:39.331 Latency(us) 00:28:39.331 [2024-11-19T08:46:26.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.331 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:39.331 Verification LBA range: start 0x0 length 0x4000 00:28:39.331 Nvme0n1 : 26.87 11151.26 43.56 0.00 0.00 11452.90 771.41 3075822.93 00:28:39.331 [2024-11-19T08:46:26.079Z] =================================================================================================================== 00:28:39.331 [2024-11-19T08:46:26.079Z] Total : 11151.26 43.56 0.00 0.00 11452.90 771.41 3075822.93 00:28:39.331 09:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.591 rmmod nvme_tcp 00:28:39.591 rmmod nvme_fabrics 00:28:39.591 rmmod nvme_keyring 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.591 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 466480 ']' 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 466480 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 466480 ']' 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 466480 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.592 09:46:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466480 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466480' 00:28:39.592 killing process with pid 466480 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 466480 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 466480 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.592 09:46:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.592 09:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.137 00:28:42.137 real 0m41.504s 00:28:42.137 user 1m47.576s 00:28:42.137 sys 0m11.402s 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:42.137 ************************************ 00:28:42.137 END TEST nvmf_host_multipath_status 00:28:42.137 ************************************ 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.137 ************************************ 00:28:42.137 START TEST nvmf_discovery_remove_ifc 00:28:42.137 ************************************ 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:42.137 * Looking for test storage... 
00:28:42.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:28:42.137 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:28:42.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.138 --rc genhtml_branch_coverage=1 00:28:42.138 --rc genhtml_function_coverage=1 00:28:42.138 --rc genhtml_legend=1 00:28:42.138 --rc geninfo_all_blocks=1 00:28:42.138 --rc geninfo_unexecuted_blocks=1 00:28:42.138 00:28:42.138 ' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:42.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.138 --rc genhtml_branch_coverage=1 00:28:42.138 --rc genhtml_function_coverage=1 00:28:42.138 --rc genhtml_legend=1 00:28:42.138 --rc geninfo_all_blocks=1 00:28:42.138 --rc geninfo_unexecuted_blocks=1 00:28:42.138 00:28:42.138 ' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:42.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.138 --rc genhtml_branch_coverage=1 00:28:42.138 --rc genhtml_function_coverage=1 00:28:42.138 --rc genhtml_legend=1 00:28:42.138 --rc geninfo_all_blocks=1 00:28:42.138 --rc geninfo_unexecuted_blocks=1 00:28:42.138 00:28:42.138 ' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:42.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.138 --rc genhtml_branch_coverage=1 00:28:42.138 --rc genhtml_function_coverage=1 00:28:42.138 --rc genhtml_legend=1 00:28:42.138 --rc geninfo_all_blocks=1 00:28:42.138 --rc geninfo_unexecuted_blocks=1 00:28:42.138 00:28:42.138 ' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:42.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:42.138 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.139 
09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.139 09:46:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.275 09:46:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.275 09:46:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.275 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:50.275 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.276 09:46:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:50.276 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:50.276 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:50.276 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.276 09:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:28:50.276 00:28:50.276 --- 10.0.0.2 ping statistics --- 00:28:50.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.276 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:50.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:28:50.276 00:28:50.276 --- 10.0.0.1 ping statistics --- 00:28:50.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.276 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=476966 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 476966 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 476966 ']' 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.276 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.277 09:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:50.277 [2024-11-19 09:46:36.230650] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:50.277 [2024-11-19 09:46:36.230712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.277 [2024-11-19 09:46:36.328449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.277 [2024-11-19 09:46:36.378707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.277 [2024-11-19 09:46:36.378756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:50.277 [2024-11-19 09:46:36.378765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.277 [2024-11-19 09:46:36.378772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.277 [2024-11-19 09:46:36.378778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.277 [2024-11-19 09:46:36.379540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:50.537 [2024-11-19 09:46:37.101849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.537 [2024-11-19 09:46:37.110080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:50.537 null0 00:28:50.537 [2024-11-19 09:46:37.142047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=477075 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 477075 /tmp/host.sock 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 477075 ']' 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:50.537 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.537 09:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:50.537 [2024-11-19 09:46:37.219625] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:28:50.537 [2024-11-19 09:46:37.219693] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477075 ] 00:28:50.797 [2024-11-19 09:46:37.312992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.797 [2024-11-19 09:46:37.365824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.368 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:51.629 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.629 09:46:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:51.629 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.629 09:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:52.575 [2024-11-19 09:46:39.191124] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:52.575 [2024-11-19 09:46:39.191144] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:52.575 [2024-11-19 09:46:39.191162] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:52.839 [2024-11-19 09:46:39.319615] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:52.839 [2024-11-19 09:46:39.500740] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:52.839 [2024-11-19 09:46:39.501712] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbe53f0:1 started. 
00:28:52.839 [2024-11-19 09:46:39.503347] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:52.839 [2024-11-19 09:46:39.503390] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:52.839 [2024-11-19 09:46:39.503412] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:52.839 [2024-11-19 09:46:39.503425] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:52.839 [2024-11-19 09:46:39.503445] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:52.839 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.839 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:52.839 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.839 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.839 [2024-11-19 09:46:39.510098] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbe53f0 was disconnected and freed. delete nvme_qpair. 
00:28:52.839 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.839 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.839 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.840 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:52.840 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:52.840 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.840 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:52.840 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:52.840 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:53.099 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:53.099 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:53.099 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.099 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:53.099 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.099 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:53.099 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:28:53.099 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:53.100 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.100 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:53.100 09:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:54.039 09:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:55.423 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:55.423 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:28:55.423 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.423 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.423 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:55.423 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:55.423 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.424 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.424 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:55.424 09:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:56.366 09:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:57.307 09:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:58.252 [2024-11-19 09:46:44.943935] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:58.252 [2024-11-19 09:46:44.943975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.253 [2024-11-19 09:46:44.943986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.253 [2024-11-19 09:46:44.943994] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.253 [2024-11-19 09:46:44.943999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.253 [2024-11-19 09:46:44.944005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.253 [2024-11-19 09:46:44.944010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.253 [2024-11-19 09:46:44.944017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.253 [2024-11-19 09:46:44.944023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.253 [2024-11-19 09:46:44.944033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.253 [2024-11-19 09:46:44.944038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.253 [2024-11-19 09:46:44.944043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc1c00 is same with the state(6) to be set 00:28:58.253 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:58.253 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.253 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:58.253 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:58.253 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:58.253 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:58.253 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:58.253 [2024-11-19 09:46:44.953956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc1c00 (9): Bad file descriptor 00:28:58.253 [2024-11-19 09:46:44.963991] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:58.253 [2024-11-19 09:46:44.964001] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:58.253 [2024-11-19 09:46:44.964005] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:58.253 [2024-11-19 09:46:44.964008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:58.253 [2024-11-19 09:46:44.964027] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:58.253 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.514 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:58.514 09:46:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:59.457 [2024-11-19 09:46:45.979246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:59.457 [2024-11-19 09:46:45.979338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc1c00 with addr=10.0.0.2, port=4420 00:28:59.457 [2024-11-19 09:46:45.979370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc1c00 is same with the state(6) to be set 00:28:59.457 [2024-11-19 09:46:45.979428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc1c00 (9): Bad file descriptor 00:28:59.457 [2024-11-19 09:46:45.979541] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:28:59.457 [2024-11-19 09:46:45.979599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:59.457 [2024-11-19 09:46:45.979621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:59.457 [2024-11-19 09:46:45.979645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:59.457 [2024-11-19 09:46:45.979666] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:59.457 [2024-11-19 09:46:45.979682] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:28:59.457 [2024-11-19 09:46:45.979695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:59.457 [2024-11-19 09:46:45.979717] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:59.457 [2024-11-19 09:46:45.979743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:59.457 09:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:59.457 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.457 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:59.457 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.457 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:59.457 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:59.457 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:59.457 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.457 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:59.457 09:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:00.401 [2024-11-19 09:46:46.982150] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:00.401 [2024-11-19 09:46:46.982170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:29:00.401 [2024-11-19 09:46:46.982179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:00.401 [2024-11-19 09:46:46.982184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:00.401 [2024-11-19 09:46:46.982190] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:29:00.401 [2024-11-19 09:46:46.982195] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:00.401 [2024-11-19 09:46:46.982199] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:00.401 [2024-11-19 09:46:46.982202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:00.401 [2024-11-19 09:46:46.982219] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:00.401 [2024-11-19 09:46:46.982238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.401 [2024-11-19 09:46:46.982245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.401 [2024-11-19 09:46:46.982252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.401 [2024-11-19 09:46:46.982257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.401 [2024-11-19 09:46:46.982263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:00.401 [2024-11-19 09:46:46.982268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.401 [2024-11-19 09:46:46.982274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.401 [2024-11-19 09:46:46.982279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.401 [2024-11-19 09:46:46.982285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.401 [2024-11-19 09:46:46.982290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.401 [2024-11-19 09:46:46.982298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:29:00.401 [2024-11-19 09:46:46.982793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1340 (9): Bad file descriptor 00:29:00.401 [2024-11-19 09:46:46.983803] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:00.401 [2024-11-19 09:46:46.983811] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.401 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:00.662 09:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:01.652 09:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:02.651 [2024-11-19 09:46:49.037024] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:02.651 [2024-11-19 09:46:49.037038] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:02.651 [2024-11-19 09:46:49.037047] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:02.651 [2024-11-19 09:46:49.167433] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:02.651 [2024-11-19 09:46:49.223064] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:29:02.651 [2024-11-19 09:46:49.223874] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xbb6130:1 started. 00:29:02.651 [2024-11-19 09:46:49.224782] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:02.651 [2024-11-19 09:46:49.224812] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:02.651 [2024-11-19 09:46:49.224828] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:02.651 [2024-11-19 09:46:49.224839] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:02.651 [2024-11-19 09:46:49.224845] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:02.651 [2024-11-19 09:46:49.233731] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xbb6130 was disconnected and freed. delete nvme_qpair. 
00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 477075 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 477075 ']' 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 477075 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.651 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 477075 00:29:02.926 
09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 477075' 00:29:02.926 killing process with pid 477075 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 477075 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 477075 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.926 rmmod nvme_tcp 00:29:02.926 rmmod nvme_fabrics 00:29:02.926 rmmod nvme_keyring 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 476966 ']' 00:29:02.926 09:46:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 476966 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 476966 ']' 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 476966 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476966 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476966' 00:29:02.926 killing process with pid 476966 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 476966 00:29:02.926 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 476966 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.192 09:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.133 09:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.133 00:29:05.133 real 0m23.414s 00:29:05.133 user 0m27.480s 00:29:05.133 sys 0m7.167s 00:29:05.133 09:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.133 09:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:05.133 ************************************ 00:29:05.133 END TEST nvmf_discovery_remove_ifc 00:29:05.133 ************************************ 00:29:05.403 09:46:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:05.403 09:46:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:05.403 09:46:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.403 09:46:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.403 ************************************ 00:29:05.403 START TEST nvmf_identify_kernel_target 
00:29:05.403 ************************************ 00:29:05.403 09:46:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:05.403 * Looking for test storage... 00:29:05.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:05.403 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:05.403 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:29:05.403 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:05.403 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@341 -- # ver2_l=1 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.404 
09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:05.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.404 --rc genhtml_branch_coverage=1 00:29:05.404 --rc genhtml_function_coverage=1 00:29:05.404 --rc genhtml_legend=1 00:29:05.404 --rc geninfo_all_blocks=1 00:29:05.404 --rc geninfo_unexecuted_blocks=1 00:29:05.404 00:29:05.404 ' 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:05.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.404 --rc genhtml_branch_coverage=1 00:29:05.404 --rc genhtml_function_coverage=1 00:29:05.404 --rc genhtml_legend=1 00:29:05.404 --rc geninfo_all_blocks=1 00:29:05.404 --rc geninfo_unexecuted_blocks=1 00:29:05.404 00:29:05.404 ' 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:05.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.404 --rc genhtml_branch_coverage=1 00:29:05.404 --rc genhtml_function_coverage=1 00:29:05.404 --rc genhtml_legend=1 00:29:05.404 --rc geninfo_all_blocks=1 00:29:05.404 --rc geninfo_unexecuted_blocks=1 00:29:05.404 00:29:05.404 ' 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:05.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.404 --rc genhtml_branch_coverage=1 00:29:05.404 --rc genhtml_function_coverage=1 00:29:05.404 --rc genhtml_legend=1 00:29:05.404 --rc geninfo_all_blocks=1 00:29:05.404 --rc geninfo_unexecuted_blocks=1 00:29:05.404 
00:29:05.404 ' 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.404 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.675 09:46:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:05.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.675 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.676 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.676 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.676 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.676 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.676 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.676 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.676 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.676 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.676 09:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.010 09:46:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:14.010 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.010 09:46:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:14.010 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.010 09:46:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:14.010 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.010 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:14.010 Found net devices under 0000:4b:00.1: cvl_0_1 
00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:14.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:29:14.011 00:29:14.011 --- 10.0.0.2 ping statistics --- 00:29:14.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.011 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:29:14.011 00:29:14.011 --- 10.0.0.1 ping statistics --- 00:29:14.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.011 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:14.011 
09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:14.011 09:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:16.772 Waiting for block devices as requested 00:29:16.772 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:16.772 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:16.772 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:16.772 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:16.772 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:16.772 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:17.085 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:17.085 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:17.085 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:17.428 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:17.428 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:17.428 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:17.712 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:17.712 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:17.712 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:29:17.974 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:17.974 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:18.234 No valid GPT data, bailing 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:18.234 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:18.235 09:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:29:18.496 00:29:18.496 Discovery Log Number of Records 2, Generation counter 2 00:29:18.496 =====Discovery Log Entry 0====== 00:29:18.496 trtype: tcp 00:29:18.496 adrfam: ipv4 00:29:18.496 subtype: current discovery subsystem 
00:29:18.496 treq: not specified, sq flow control disable supported 00:29:18.496 portid: 1 00:29:18.496 trsvcid: 4420 00:29:18.496 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:18.496 traddr: 10.0.0.1 00:29:18.496 eflags: none 00:29:18.496 sectype: none 00:29:18.496 =====Discovery Log Entry 1====== 00:29:18.496 trtype: tcp 00:29:18.496 adrfam: ipv4 00:29:18.496 subtype: nvme subsystem 00:29:18.496 treq: not specified, sq flow control disable supported 00:29:18.496 portid: 1 00:29:18.496 trsvcid: 4420 00:29:18.496 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:18.496 traddr: 10.0.0.1 00:29:18.496 eflags: none 00:29:18.496 sectype: none 00:29:18.496 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:18.496 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:18.496 ===================================================== 00:29:18.496 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:18.496 ===================================================== 00:29:18.496 Controller Capabilities/Features 00:29:18.496 ================================ 00:29:18.496 Vendor ID: 0000 00:29:18.496 Subsystem Vendor ID: 0000 00:29:18.496 Serial Number: 3c5d077e95e41c9a2560 00:29:18.496 Model Number: Linux 00:29:18.496 Firmware Version: 6.8.9-20 00:29:18.496 Recommended Arb Burst: 0 00:29:18.496 IEEE OUI Identifier: 00 00 00 00:29:18.496 Multi-path I/O 00:29:18.496 May have multiple subsystem ports: No 00:29:18.496 May have multiple controllers: No 00:29:18.496 Associated with SR-IOV VF: No 00:29:18.496 Max Data Transfer Size: Unlimited 00:29:18.496 Max Number of Namespaces: 0 00:29:18.496 Max Number of I/O Queues: 1024 00:29:18.496 NVMe Specification Version (VS): 1.3 00:29:18.496 NVMe Specification Version (Identify): 1.3 00:29:18.496 Maximum Queue Entries: 1024 
00:29:18.496 Contiguous Queues Required: No 00:29:18.496 Arbitration Mechanisms Supported 00:29:18.496 Weighted Round Robin: Not Supported 00:29:18.496 Vendor Specific: Not Supported 00:29:18.496 Reset Timeout: 7500 ms 00:29:18.496 Doorbell Stride: 4 bytes 00:29:18.496 NVM Subsystem Reset: Not Supported 00:29:18.496 Command Sets Supported 00:29:18.496 NVM Command Set: Supported 00:29:18.496 Boot Partition: Not Supported 00:29:18.496 Memory Page Size Minimum: 4096 bytes 00:29:18.496 Memory Page Size Maximum: 4096 bytes 00:29:18.496 Persistent Memory Region: Not Supported 00:29:18.496 Optional Asynchronous Events Supported 00:29:18.496 Namespace Attribute Notices: Not Supported 00:29:18.496 Firmware Activation Notices: Not Supported 00:29:18.496 ANA Change Notices: Not Supported 00:29:18.496 PLE Aggregate Log Change Notices: Not Supported 00:29:18.496 LBA Status Info Alert Notices: Not Supported 00:29:18.496 EGE Aggregate Log Change Notices: Not Supported 00:29:18.496 Normal NVM Subsystem Shutdown event: Not Supported 00:29:18.496 Zone Descriptor Change Notices: Not Supported 00:29:18.496 Discovery Log Change Notices: Supported 00:29:18.496 Controller Attributes 00:29:18.496 128-bit Host Identifier: Not Supported 00:29:18.496 Non-Operational Permissive Mode: Not Supported 00:29:18.496 NVM Sets: Not Supported 00:29:18.496 Read Recovery Levels: Not Supported 00:29:18.496 Endurance Groups: Not Supported 00:29:18.496 Predictable Latency Mode: Not Supported 00:29:18.496 Traffic Based Keep ALive: Not Supported 00:29:18.496 Namespace Granularity: Not Supported 00:29:18.496 SQ Associations: Not Supported 00:29:18.496 UUID List: Not Supported 00:29:18.496 Multi-Domain Subsystem: Not Supported 00:29:18.496 Fixed Capacity Management: Not Supported 00:29:18.496 Variable Capacity Management: Not Supported 00:29:18.496 Delete Endurance Group: Not Supported 00:29:18.496 Delete NVM Set: Not Supported 00:29:18.496 Extended LBA Formats Supported: Not Supported 00:29:18.496 Flexible 
Data Placement Supported: Not Supported 00:29:18.496 00:29:18.496 Controller Memory Buffer Support 00:29:18.496 ================================ 00:29:18.496 Supported: No 00:29:18.496 00:29:18.496 Persistent Memory Region Support 00:29:18.496 ================================ 00:29:18.496 Supported: No 00:29:18.496 00:29:18.496 Admin Command Set Attributes 00:29:18.496 ============================ 00:29:18.496 Security Send/Receive: Not Supported 00:29:18.496 Format NVM: Not Supported 00:29:18.496 Firmware Activate/Download: Not Supported 00:29:18.496 Namespace Management: Not Supported 00:29:18.496 Device Self-Test: Not Supported 00:29:18.496 Directives: Not Supported 00:29:18.496 NVMe-MI: Not Supported 00:29:18.496 Virtualization Management: Not Supported 00:29:18.496 Doorbell Buffer Config: Not Supported 00:29:18.496 Get LBA Status Capability: Not Supported 00:29:18.496 Command & Feature Lockdown Capability: Not Supported 00:29:18.496 Abort Command Limit: 1 00:29:18.496 Async Event Request Limit: 1 00:29:18.496 Number of Firmware Slots: N/A 00:29:18.496 Firmware Slot 1 Read-Only: N/A 00:29:18.496 Firmware Activation Without Reset: N/A 00:29:18.496 Multiple Update Detection Support: N/A 00:29:18.496 Firmware Update Granularity: No Information Provided 00:29:18.496 Per-Namespace SMART Log: No 00:29:18.496 Asymmetric Namespace Access Log Page: Not Supported 00:29:18.496 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:18.496 Command Effects Log Page: Not Supported 00:29:18.496 Get Log Page Extended Data: Supported 00:29:18.496 Telemetry Log Pages: Not Supported 00:29:18.496 Persistent Event Log Pages: Not Supported 00:29:18.496 Supported Log Pages Log Page: May Support 00:29:18.496 Commands Supported & Effects Log Page: Not Supported 00:29:18.496 Feature Identifiers & Effects Log Page:May Support 00:29:18.496 NVMe-MI Commands & Effects Log Page: May Support 00:29:18.496 Data Area 4 for Telemetry Log: Not Supported 00:29:18.496 Error Log Page Entries 
Supported: 1 00:29:18.496 Keep Alive: Not Supported 00:29:18.496 00:29:18.496 NVM Command Set Attributes 00:29:18.496 ========================== 00:29:18.496 Submission Queue Entry Size 00:29:18.496 Max: 1 00:29:18.496 Min: 1 00:29:18.496 Completion Queue Entry Size 00:29:18.496 Max: 1 00:29:18.496 Min: 1 00:29:18.496 Number of Namespaces: 0 00:29:18.496 Compare Command: Not Supported 00:29:18.496 Write Uncorrectable Command: Not Supported 00:29:18.496 Dataset Management Command: Not Supported 00:29:18.496 Write Zeroes Command: Not Supported 00:29:18.496 Set Features Save Field: Not Supported 00:29:18.496 Reservations: Not Supported 00:29:18.496 Timestamp: Not Supported 00:29:18.496 Copy: Not Supported 00:29:18.496 Volatile Write Cache: Not Present 00:29:18.496 Atomic Write Unit (Normal): 1 00:29:18.496 Atomic Write Unit (PFail): 1 00:29:18.496 Atomic Compare & Write Unit: 1 00:29:18.496 Fused Compare & Write: Not Supported 00:29:18.496 Scatter-Gather List 00:29:18.496 SGL Command Set: Supported 00:29:18.496 SGL Keyed: Not Supported 00:29:18.496 SGL Bit Bucket Descriptor: Not Supported 00:29:18.496 SGL Metadata Pointer: Not Supported 00:29:18.496 Oversized SGL: Not Supported 00:29:18.496 SGL Metadata Address: Not Supported 00:29:18.496 SGL Offset: Supported 00:29:18.496 Transport SGL Data Block: Not Supported 00:29:18.496 Replay Protected Memory Block: Not Supported 00:29:18.496 00:29:18.496 Firmware Slot Information 00:29:18.496 ========================= 00:29:18.496 Active slot: 0 00:29:18.496 00:29:18.496 00:29:18.496 Error Log 00:29:18.496 ========= 00:29:18.496 00:29:18.496 Active Namespaces 00:29:18.496 ================= 00:29:18.496 Discovery Log Page 00:29:18.496 ================== 00:29:18.496 Generation Counter: 2 00:29:18.496 Number of Records: 2 00:29:18.496 Record Format: 0 00:29:18.496 00:29:18.496 Discovery Log Entry 0 00:29:18.496 ---------------------- 00:29:18.496 Transport Type: 3 (TCP) 00:29:18.496 Address Family: 1 (IPv4) 00:29:18.496 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:29:18.496 Entry Flags: 00:29:18.496 Duplicate Returned Information: 0 00:29:18.496 Explicit Persistent Connection Support for Discovery: 0 00:29:18.496 Transport Requirements: 00:29:18.496 Secure Channel: Not Specified 00:29:18.496 Port ID: 1 (0x0001) 00:29:18.496 Controller ID: 65535 (0xffff) 00:29:18.496 Admin Max SQ Size: 32 00:29:18.496 Transport Service Identifier: 4420 00:29:18.496 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:18.496 Transport Address: 10.0.0.1 00:29:18.496 Discovery Log Entry 1 00:29:18.496 ---------------------- 00:29:18.496 Transport Type: 3 (TCP) 00:29:18.496 Address Family: 1 (IPv4) 00:29:18.496 Subsystem Type: 2 (NVM Subsystem) 00:29:18.496 Entry Flags: 00:29:18.496 Duplicate Returned Information: 0 00:29:18.496 Explicit Persistent Connection Support for Discovery: 0 00:29:18.496 Transport Requirements: 00:29:18.496 Secure Channel: Not Specified 00:29:18.496 Port ID: 1 (0x0001) 00:29:18.496 Controller ID: 65535 (0xffff) 00:29:18.496 Admin Max SQ Size: 32 00:29:18.496 Transport Service Identifier: 4420 00:29:18.496 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:18.496 Transport Address: 10.0.0.1 00:29:18.496 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:18.496 get_feature(0x01) failed 00:29:18.496 get_feature(0x02) failed 00:29:18.496 get_feature(0x04) failed 00:29:18.496 ===================================================== 00:29:18.496 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:18.496 ===================================================== 00:29:18.496 Controller Capabilities/Features 00:29:18.496 ================================ 00:29:18.496 Vendor ID: 0000 00:29:18.496 Subsystem Vendor ID: 
0000 00:29:18.496 Serial Number: ba48ffeda0e6d303d944 00:29:18.496 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:18.496 Firmware Version: 6.8.9-20 00:29:18.496 Recommended Arb Burst: 6 00:29:18.496 IEEE OUI Identifier: 00 00 00 00:29:18.496 Multi-path I/O 00:29:18.496 May have multiple subsystem ports: Yes 00:29:18.496 May have multiple controllers: Yes 00:29:18.496 Associated with SR-IOV VF: No 00:29:18.496 Max Data Transfer Size: Unlimited 00:29:18.496 Max Number of Namespaces: 1024 00:29:18.496 Max Number of I/O Queues: 128 00:29:18.496 NVMe Specification Version (VS): 1.3 00:29:18.496 NVMe Specification Version (Identify): 1.3 00:29:18.496 Maximum Queue Entries: 1024 00:29:18.496 Contiguous Queues Required: No 00:29:18.496 Arbitration Mechanisms Supported 00:29:18.496 Weighted Round Robin: Not Supported 00:29:18.496 Vendor Specific: Not Supported 00:29:18.496 Reset Timeout: 7500 ms 00:29:18.496 Doorbell Stride: 4 bytes 00:29:18.497 NVM Subsystem Reset: Not Supported 00:29:18.497 Command Sets Supported 00:29:18.497 NVM Command Set: Supported 00:29:18.497 Boot Partition: Not Supported 00:29:18.497 Memory Page Size Minimum: 4096 bytes 00:29:18.497 Memory Page Size Maximum: 4096 bytes 00:29:18.497 Persistent Memory Region: Not Supported 00:29:18.497 Optional Asynchronous Events Supported 00:29:18.497 Namespace Attribute Notices: Supported 00:29:18.497 Firmware Activation Notices: Not Supported 00:29:18.497 ANA Change Notices: Supported 00:29:18.497 PLE Aggregate Log Change Notices: Not Supported 00:29:18.497 LBA Status Info Alert Notices: Not Supported 00:29:18.497 EGE Aggregate Log Change Notices: Not Supported 00:29:18.497 Normal NVM Subsystem Shutdown event: Not Supported 00:29:18.497 Zone Descriptor Change Notices: Not Supported 00:29:18.497 Discovery Log Change Notices: Not Supported 00:29:18.497 Controller Attributes 00:29:18.497 128-bit Host Identifier: Supported 00:29:18.497 Non-Operational Permissive Mode: Not Supported 00:29:18.497 NVM Sets: Not 
Supported 00:29:18.497 Read Recovery Levels: Not Supported 00:29:18.497 Endurance Groups: Not Supported 00:29:18.497 Predictable Latency Mode: Not Supported 00:29:18.497 Traffic Based Keep ALive: Supported 00:29:18.497 Namespace Granularity: Not Supported 00:29:18.497 SQ Associations: Not Supported 00:29:18.497 UUID List: Not Supported 00:29:18.497 Multi-Domain Subsystem: Not Supported 00:29:18.497 Fixed Capacity Management: Not Supported 00:29:18.497 Variable Capacity Management: Not Supported 00:29:18.497 Delete Endurance Group: Not Supported 00:29:18.497 Delete NVM Set: Not Supported 00:29:18.497 Extended LBA Formats Supported: Not Supported 00:29:18.497 Flexible Data Placement Supported: Not Supported 00:29:18.497 00:29:18.497 Controller Memory Buffer Support 00:29:18.497 ================================ 00:29:18.497 Supported: No 00:29:18.497 00:29:18.497 Persistent Memory Region Support 00:29:18.497 ================================ 00:29:18.497 Supported: No 00:29:18.497 00:29:18.497 Admin Command Set Attributes 00:29:18.497 ============================ 00:29:18.497 Security Send/Receive: Not Supported 00:29:18.497 Format NVM: Not Supported 00:29:18.497 Firmware Activate/Download: Not Supported 00:29:18.497 Namespace Management: Not Supported 00:29:18.497 Device Self-Test: Not Supported 00:29:18.497 Directives: Not Supported 00:29:18.497 NVMe-MI: Not Supported 00:29:18.497 Virtualization Management: Not Supported 00:29:18.497 Doorbell Buffer Config: Not Supported 00:29:18.497 Get LBA Status Capability: Not Supported 00:29:18.497 Command & Feature Lockdown Capability: Not Supported 00:29:18.497 Abort Command Limit: 4 00:29:18.497 Async Event Request Limit: 4 00:29:18.497 Number of Firmware Slots: N/A 00:29:18.497 Firmware Slot 1 Read-Only: N/A 00:29:18.497 Firmware Activation Without Reset: N/A 00:29:18.497 Multiple Update Detection Support: N/A 00:29:18.497 Firmware Update Granularity: No Information Provided 00:29:18.497 Per-Namespace SMART Log: Yes 
00:29:18.497 Asymmetric Namespace Access Log Page: Supported 00:29:18.497 ANA Transition Time : 10 sec 00:29:18.497 00:29:18.497 Asymmetric Namespace Access Capabilities 00:29:18.497 ANA Optimized State : Supported 00:29:18.497 ANA Non-Optimized State : Supported 00:29:18.497 ANA Inaccessible State : Supported 00:29:18.497 ANA Persistent Loss State : Supported 00:29:18.497 ANA Change State : Supported 00:29:18.497 ANAGRPID is not changed : No 00:29:18.497 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:18.497 00:29:18.497 ANA Group Identifier Maximum : 128 00:29:18.497 Number of ANA Group Identifiers : 128 00:29:18.497 Max Number of Allowed Namespaces : 1024 00:29:18.497 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:18.497 Command Effects Log Page: Supported 00:29:18.497 Get Log Page Extended Data: Supported 00:29:18.497 Telemetry Log Pages: Not Supported 00:29:18.497 Persistent Event Log Pages: Not Supported 00:29:18.497 Supported Log Pages Log Page: May Support 00:29:18.497 Commands Supported & Effects Log Page: Not Supported 00:29:18.497 Feature Identifiers & Effects Log Page:May Support 00:29:18.497 NVMe-MI Commands & Effects Log Page: May Support 00:29:18.497 Data Area 4 for Telemetry Log: Not Supported 00:29:18.497 Error Log Page Entries Supported: 128 00:29:18.497 Keep Alive: Supported 00:29:18.497 Keep Alive Granularity: 1000 ms 00:29:18.497 00:29:18.497 NVM Command Set Attributes 00:29:18.497 ========================== 00:29:18.497 Submission Queue Entry Size 00:29:18.497 Max: 64 00:29:18.497 Min: 64 00:29:18.497 Completion Queue Entry Size 00:29:18.497 Max: 16 00:29:18.497 Min: 16 00:29:18.497 Number of Namespaces: 1024 00:29:18.497 Compare Command: Not Supported 00:29:18.497 Write Uncorrectable Command: Not Supported 00:29:18.497 Dataset Management Command: Supported 00:29:18.497 Write Zeroes Command: Supported 00:29:18.497 Set Features Save Field: Not Supported 00:29:18.497 Reservations: Not Supported 00:29:18.497 Timestamp: Not Supported 
00:29:18.497 Copy: Not Supported 00:29:18.497 Volatile Write Cache: Present 00:29:18.497 Atomic Write Unit (Normal): 1 00:29:18.497 Atomic Write Unit (PFail): 1 00:29:18.497 Atomic Compare & Write Unit: 1 00:29:18.497 Fused Compare & Write: Not Supported 00:29:18.497 Scatter-Gather List 00:29:18.497 SGL Command Set: Supported 00:29:18.497 SGL Keyed: Not Supported 00:29:18.497 SGL Bit Bucket Descriptor: Not Supported 00:29:18.497 SGL Metadata Pointer: Not Supported 00:29:18.497 Oversized SGL: Not Supported 00:29:18.497 SGL Metadata Address: Not Supported 00:29:18.497 SGL Offset: Supported 00:29:18.497 Transport SGL Data Block: Not Supported 00:29:18.497 Replay Protected Memory Block: Not Supported 00:29:18.497 00:29:18.497 Firmware Slot Information 00:29:18.497 ========================= 00:29:18.497 Active slot: 0 00:29:18.497 00:29:18.497 Asymmetric Namespace Access 00:29:18.497 =========================== 00:29:18.497 Change Count : 0 00:29:18.497 Number of ANA Group Descriptors : 1 00:29:18.497 ANA Group Descriptor : 0 00:29:18.497 ANA Group ID : 1 00:29:18.497 Number of NSID Values : 1 00:29:18.497 Change Count : 0 00:29:18.497 ANA State : 1 00:29:18.497 Namespace Identifier : 1 00:29:18.497 00:29:18.497 Commands Supported and Effects 00:29:18.497 ============================== 00:29:18.497 Admin Commands 00:29:18.497 -------------- 00:29:18.497 Get Log Page (02h): Supported 00:29:18.497 Identify (06h): Supported 00:29:18.497 Abort (08h): Supported 00:29:18.497 Set Features (09h): Supported 00:29:18.497 Get Features (0Ah): Supported 00:29:18.497 Asynchronous Event Request (0Ch): Supported 00:29:18.497 Keep Alive (18h): Supported 00:29:18.497 I/O Commands 00:29:18.497 ------------ 00:29:18.497 Flush (00h): Supported 00:29:18.497 Write (01h): Supported LBA-Change 00:29:18.497 Read (02h): Supported 00:29:18.497 Write Zeroes (08h): Supported LBA-Change 00:29:18.497 Dataset Management (09h): Supported 00:29:18.497 00:29:18.497 Error Log 00:29:18.497 ========= 
00:29:18.497 Entry: 0 00:29:18.497 Error Count: 0x3 00:29:18.497 Submission Queue Id: 0x0 00:29:18.497 Command Id: 0x5 00:29:18.497 Phase Bit: 0 00:29:18.497 Status Code: 0x2 00:29:18.497 Status Code Type: 0x0 00:29:18.497 Do Not Retry: 1 00:29:18.757 Error Location: 0x28 00:29:18.757 LBA: 0x0 00:29:18.757 Namespace: 0x0 00:29:18.757 Vendor Log Page: 0x0 00:29:18.757 ----------- 00:29:18.757 Entry: 1 00:29:18.757 Error Count: 0x2 00:29:18.757 Submission Queue Id: 0x0 00:29:18.757 Command Id: 0x5 00:29:18.757 Phase Bit: 0 00:29:18.757 Status Code: 0x2 00:29:18.757 Status Code Type: 0x0 00:29:18.757 Do Not Retry: 1 00:29:18.757 Error Location: 0x28 00:29:18.757 LBA: 0x0 00:29:18.757 Namespace: 0x0 00:29:18.757 Vendor Log Page: 0x0 00:29:18.757 ----------- 00:29:18.757 Entry: 2 00:29:18.757 Error Count: 0x1 00:29:18.757 Submission Queue Id: 0x0 00:29:18.757 Command Id: 0x4 00:29:18.757 Phase Bit: 0 00:29:18.757 Status Code: 0x2 00:29:18.757 Status Code Type: 0x0 00:29:18.757 Do Not Retry: 1 00:29:18.757 Error Location: 0x28 00:29:18.757 LBA: 0x0 00:29:18.757 Namespace: 0x0 00:29:18.757 Vendor Log Page: 0x0 00:29:18.757 00:29:18.757 Number of Queues 00:29:18.757 ================ 00:29:18.757 Number of I/O Submission Queues: 128 00:29:18.757 Number of I/O Completion Queues: 128 00:29:18.757 00:29:18.757 ZNS Specific Controller Data 00:29:18.757 ============================ 00:29:18.757 Zone Append Size Limit: 0 00:29:18.757 00:29:18.757 00:29:18.757 Active Namespaces 00:29:18.757 ================= 00:29:18.757 get_feature(0x05) failed 00:29:18.757 Namespace ID:1 00:29:18.757 Command Set Identifier: NVM (00h) 00:29:18.757 Deallocate: Supported 00:29:18.757 Deallocated/Unwritten Error: Not Supported 00:29:18.757 Deallocated Read Value: Unknown 00:29:18.757 Deallocate in Write Zeroes: Not Supported 00:29:18.757 Deallocated Guard Field: 0xFFFF 00:29:18.757 Flush: Supported 00:29:18.757 Reservation: Not Supported 00:29:18.757 Namespace Sharing Capabilities: Multiple 
Controllers 00:29:18.757 Size (in LBAs): 3750748848 (1788GiB) 00:29:18.757 Capacity (in LBAs): 3750748848 (1788GiB) 00:29:18.757 Utilization (in LBAs): 3750748848 (1788GiB) 00:29:18.757 UUID: 5ca6a9ed-5609-4c78-8775-8c53a1a0594f 00:29:18.757 Thin Provisioning: Not Supported 00:29:18.757 Per-NS Atomic Units: Yes 00:29:18.757 Atomic Write Unit (Normal): 8 00:29:18.757 Atomic Write Unit (PFail): 8 00:29:18.757 Preferred Write Granularity: 8 00:29:18.757 Atomic Compare & Write Unit: 8 00:29:18.757 Atomic Boundary Size (Normal): 0 00:29:18.757 Atomic Boundary Size (PFail): 0 00:29:18.757 Atomic Boundary Offset: 0 00:29:18.757 NGUID/EUI64 Never Reused: No 00:29:18.757 ANA group ID: 1 00:29:18.757 Namespace Write Protected: No 00:29:18.757 Number of LBA Formats: 1 00:29:18.757 Current LBA Format: LBA Format #00 00:29:18.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:18.757 00:29:18.757 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:18.757 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.757 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:18.757 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.758 rmmod nvme_tcp 00:29:18.758 rmmod nvme_fabrics 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:29:18.758 09:47:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.758 09:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.668 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.668 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:20.668 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:20.668 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:29:20.668 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:20.668 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:20.928 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:20.928 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:20.928 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:20.928 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:20.928 09:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:24.229 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:24.229 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:24.229 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:24.229 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:24.229 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:24.229 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:29:24.490 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:24.490 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:25.064 00:29:25.064 real 0m19.576s 00:29:25.064 user 0m5.306s 00:29:25.064 sys 0m11.250s 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.064 ************************************ 00:29:25.064 END TEST nvmf_identify_kernel_target 00:29:25.064 ************************************ 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.064 ************************************ 00:29:25.064 START TEST nvmf_auth_host 00:29:25.064 ************************************ 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:25.064 * Looking for test storage... 
00:29:25.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:25.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.064 --rc genhtml_branch_coverage=1 00:29:25.064 --rc genhtml_function_coverage=1 00:29:25.064 --rc genhtml_legend=1 00:29:25.064 --rc geninfo_all_blocks=1 00:29:25.064 --rc geninfo_unexecuted_blocks=1 00:29:25.064 00:29:25.064 ' 00:29:25.064 09:47:11 
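The lt/cmp_versions trace above walks dot-separated version fields left to right (here confirming lcov 1.15 < 2 before enabling the branch-coverage options). A minimal standalone sketch of the same comparison idea, under a hypothetical helper name `ver_lt` (not the script's own function):

```shell
# ver_lt A B: succeed (exit 0) if version A < B, comparing
# dot-separated numeric fields left to right; a missing field
# counts as 0, and equal versions are not "less than".
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1
}

ver_lt 1.15 2 && echo "1.15 < 2"   # matches the lcov check in the trace
```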
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:25.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.064 --rc genhtml_branch_coverage=1 00:29:25.064 --rc genhtml_function_coverage=1 00:29:25.064 --rc genhtml_legend=1 00:29:25.064 --rc geninfo_all_blocks=1 00:29:25.064 --rc geninfo_unexecuted_blocks=1 00:29:25.064 00:29:25.064 ' 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:25.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.064 --rc genhtml_branch_coverage=1 00:29:25.064 --rc genhtml_function_coverage=1 00:29:25.064 --rc genhtml_legend=1 00:29:25.064 --rc geninfo_all_blocks=1 00:29:25.064 --rc geninfo_unexecuted_blocks=1 00:29:25.064 00:29:25.064 ' 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:25.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.064 --rc genhtml_branch_coverage=1 00:29:25.064 --rc genhtml_function_coverage=1 00:29:25.064 --rc genhtml_legend=1 00:29:25.064 --rc geninfo_all_blocks=1 00:29:25.064 --rc geninfo_unexecuted_blocks=1 00:29:25.064 00:29:25.064 ' 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.064 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.326 09:47:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:25.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.326 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.327 09:47:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.327 09:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.467 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:33.468 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:33.468 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:33.468 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:33.468 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:33.468 09:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.468 09:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.468 09:47:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:29:33.468 00:29:33.468 --- 10.0.0.2 ping statistics --- 00:29:33.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.468 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:29:33.468 00:29:33.468 --- 10.0.0.1 ping statistics --- 00:29:33.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.468 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=491446 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 491446 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
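The nvmf_tcp_init sequence traced above isolates the target-side port in a network namespace and verifies reachability in both directions with ping. Condensed from the log (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the log's own values); this is a privileged setup fragment, so it needs root and the named interfaces to exist:

```shell
# Target-side namespace; move the target port into it.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address the initiator (host side) and target (namespace side).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring links up, including loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Verify reachability both ways, as the trace does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```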
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 491446 ']' 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.468 09:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.730 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.730 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:33.730 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.730 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.730 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.730 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.730 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:33.730 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:33.730 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:33.731 09:47:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=92b9f252f42036d26d2351c93d89167e 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8Cz 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 92b9f252f42036d26d2351c93d89167e 0 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 92b9f252f42036d26d2351c93d89167e 0 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=92b9f252f42036d26d2351c93d89167e 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8Cz 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8Cz 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.8Cz 
00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=262c84d3b4d83920ef1e1e5eba1e9298e64f7947d84b6de90687b80c7e504d8b 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.baD 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 262c84d3b4d83920ef1e1e5eba1e9298e64f7947d84b6de90687b80c7e504d8b 3 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 262c84d3b4d83920ef1e1e5eba1e9298e64f7947d84b6de90687b80c7e504d8b 3 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=262c84d3b4d83920ef1e1e5eba1e9298e64f7947d84b6de90687b80c7e504d8b 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.baD 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.baD 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.baD 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e2d9d353c3294270f49bbcc2b026c8e98017b84dba78b420 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.n1k 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e2d9d353c3294270f49bbcc2b026c8e98017b84dba78b420 0 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e2d9d353c3294270f49bbcc2b026c8e98017b84dba78b420 0 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e2d9d353c3294270f49bbcc2b026c8e98017b84dba78b420 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.n1k 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.n1k 00:29:33.731 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.n1k 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=46311e6cc12d3d8553bc97698ae44f0abcbbc3313ceb3dd2 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OdU 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 46311e6cc12d3d8553bc97698ae44f0abcbbc3313ceb3dd2 2 00:29:33.992 09:47:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 46311e6cc12d3d8553bc97698ae44f0abcbbc3313ceb3dd2 2 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=46311e6cc12d3d8553bc97698ae44f0abcbbc3313ceb3dd2 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OdU 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OdU 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.OdU 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b6f253e64ea70ff6a951a9194843deda 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.LGW 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b6f253e64ea70ff6a951a9194843deda 1 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b6f253e64ea70ff6a951a9194843deda 1 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b6f253e64ea70ff6a951a9194843deda 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.LGW 00:29:33.992 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.LGW 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.LGW 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b3059739b03a0388bc4f0806a09fab2f 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.AQ9 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b3059739b03a0388bc4f0806a09fab2f 1 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b3059739b03a0388bc4f0806a09fab2f 1 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b3059739b03a0388bc4f0806a09fab2f 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.AQ9 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.AQ9 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.AQ9 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:33.993 09:47:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e2b6e9818ce5c1fd643ceaefa714c79e8021b5b31a4f1f04 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.17h 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e2b6e9818ce5c1fd643ceaefa714c79e8021b5b31a4f1f04 2 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e2b6e9818ce5c1fd643ceaefa714c79e8021b5b31a4f1f04 2 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e2b6e9818ce5c1fd643ceaefa714c79e8021b5b31a4f1f04 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:33.993 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.17h 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.17h 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.17h 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=97dbd9610915300691364b5adcf399b5 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vhD 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 97dbd9610915300691364b5adcf399b5 0 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 97dbd9610915300691364b5adcf399b5 0 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=97dbd9610915300691364b5adcf399b5 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vhD 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vhD 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.vhD 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4cb9db6c0c46f041eef72e431d4be4df12236ff6c8befdfafca15316d0bf747b 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.J0N 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4cb9db6c0c46f041eef72e431d4be4df12236ff6c8befdfafca15316d0bf747b 3 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4cb9db6c0c46f041eef72e431d4be4df12236ff6c8befdfafca15316d0bf747b 3 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4cb9db6c0c46f041eef72e431d4be4df12236ff6c8befdfafca15316d0bf747b 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:34.254 09:47:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.J0N 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.J0N 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.J0N 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 491446 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 491446 ']' 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
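The repeated `gen_dhchap_key` traces above (`xxd` from `/dev/urandom`, `mktemp`, `format_dhchap_key` into a `DHHC-1:` string, `chmod 0600`) can be sketched roughly as follows. This is a simplified illustration, not the exact `nvmf/common.sh` code: `od` stands in for `xxd` for portability, and the CRC32 suffix that the traced `python -` step appends to the key bytes before base64-encoding is omitted here.

```shell
# Rough sketch of one gen_dhchap_key invocation (sha512 -> 64 hex chars).
# Simplified: od replaces xxd, and the CRC32 that the real format_dhchap_key
# appends before base64-encoding is left out, so the blob is illustrative only.
digest_id=3                                          # sha512, per the digests map
key=$(od -vAn -N32 -tx1 /dev/urandom | tr -d ' \n')  # 32 random bytes -> 64 hex chars
file=$(mktemp -t spdk.key-sha512.XXX)                # e.g. /tmp/spdk.key-sha512.baD
printf 'DHHC-1:%02d:%s:\n' "$digest_id" \
  "$(printf '%s' "$key" | base64 | tr -d '\n')" > "$file"
chmod 0600 "$file"                                   # keys are secrets
echo "$file"                                         # path stored as keys[i]/ckeys[i]
```

The caller keeps only the echoed path; the hex key itself never leaves the temp file.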
00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.254 09:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8Cz 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.baD ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.baD 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.n1k 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.OdU ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OdU 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.LGW 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.AQ9 ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AQ9 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.17h 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.vhD ]] 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.vhD 00:29:34.515 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.J0N 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.516 09:47:21 
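The loop traced above registers each generated key file with the running SPDK application; `rpc_cmd` is a wrapper around `scripts/rpc.py`. A hedged sketch of one iteration, using file names taken from the log (this assumes the target process is already listening on `/var/tmp/spdk.sock`):

```shell
# Register a host key and its matching controller key as named keyring entries.
# rpc_cmd in the log wraps scripts/rpc.py; the paths come from gen_dhchap_key.
./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.n1k
./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OdU
```

Slots with no generated controller key (here `ckeys[4]` is empty) skip the second call, which is what the `[[ -n ... ]]` guards in the trace are checking.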
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:34.516 09:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:37.815 Waiting for block devices as requested 00:29:38.076 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:38.076 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:38.076 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:38.336 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:38.336 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:38.336 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:38.336 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:38.596 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:38.596 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:38.856 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:38.856 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:38.856 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:38.856 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:39.115 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:39.115 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:39.115 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:39.375 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:40.317 No valid GPT data, bailing 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:29:40.317 00:29:40.317 Discovery Log Number of Records 2, Generation counter 2 00:29:40.317 =====Discovery Log Entry 0====== 00:29:40.317 trtype: tcp 00:29:40.317 adrfam: ipv4 00:29:40.317 subtype: current discovery subsystem 00:29:40.317 treq: not specified, sq flow control disable supported 00:29:40.317 portid: 1 00:29:40.317 trsvcid: 4420 00:29:40.317 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:40.317 traddr: 10.0.0.1 00:29:40.317 eflags: none 00:29:40.317 sectype: none 00:29:40.317 =====Discovery Log Entry 1====== 00:29:40.317 trtype: tcp 00:29:40.317 adrfam: ipv4 00:29:40.317 subtype: nvme subsystem 00:29:40.317 treq: not specified, sq flow control disable supported 00:29:40.317 portid: 1 00:29:40.317 trsvcid: 4420 00:29:40.317 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:40.317 traddr: 10.0.0.1 00:29:40.317 eflags: none 00:29:40.317 sectype: none 00:29:40.317 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
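The `configure_kernel_target` sequence traced above (the `mkdir`s and bare `echo`s against configfs, followed by the `ln -s` that activates the port) maps onto the standard Linux `nvmet` configfs layout. A sketch, assuming root and the `nvmet`/`nvmet-tcp` modules; the subsystem-attribute writes whose destinations the log elides (the `SPDK-nqn...` serial string and the enable/allow-any-host toggles) are represented only by the attributes shown:

```shell
# Expose /dev/nvme0n1 via the kernel nvmet target on 10.0.0.1:4420 (TCP).
# Requires root; attribute names follow the standard nvmet configfs layout.
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # port goes live here
```

The `nvme discover` output that follows in the log (two records: the discovery subsystem and `nqn.2024-02.io.spdk:cnode0`) is exactly what this layout advertises.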
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:40.318 09:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
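The `nvmet_auth_set_key sha256 ffdhe2048 1` trace above shows the hash name, DH group, and `DHHC-1` blobs being echoed, but not their destinations. A sketch of where those writes land, assuming the kernel target's standard per-host DH-HMAC-CHAP attributes (the log's bare `echo`s do not confirm the attribute names):

```shell
# Configure DH-HMAC-CHAP for one allowed host on the kernel target (root).
# Attribute names are the standard nvmet host attributes; key strings are the
# DHHC-1 blobs from the trace (key1 as host key, ckey1 as controller key).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==:' \
  > "$host/dhchap_key"
echo 'DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==:' \
  > "$host/dhchap_ctrl_key"
```

Setting `dhchap_ctrl_key` is what makes the subsequent attach bidirectional: the controller must also prove knowledge of its key to the host.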
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.318 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.578 nvme0n1 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:40.578 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.579 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.841 nvme0n1 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.841 09:47:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.841 
09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.841 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.101 nvme0n1 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.101 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:29:41.362 nvme0n1 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.362 09:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.622 nvme0n1 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:41.622 09:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:41.622 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.623 nvme0n1 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.623 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.883 
09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:41.883 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:42.144 
09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.144 09:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.144 nvme0n1 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.144 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.404 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.404 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.404 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.404 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.404 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.405 09:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:42.405 09:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.405 09:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.405 nvme0n1 00:29:42.405 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.405 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.405 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.405 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.405 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.405 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.682 09:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.682 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:42.683 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.683 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.683 nvme0n1 00:29:42.683 09:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.683 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.683 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.683 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.683 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.683 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:42.943 09:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.943 nvme0n1 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.943 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.203 09:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.203 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:43.204 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.204 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.204 nvme0n1 00:29:43.204 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.204 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.204 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.204 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.204 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.204 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:43.464 09:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.034 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.295 nvme0n1 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:44.295 
09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.295 09:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.556 nvme0n1 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:44.556 09:47:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.556 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.557 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.817 nvme0n1 00:29:44.817 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.817 09:47:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.817 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.817 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.817 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.817 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:45.077 
09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:45.077 09:47:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.077 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.338 nvme0n1 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.338 09:47:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.338 
09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.338 09:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.598 nvme0n1 00:29:45.598 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.598 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.598 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.598 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.598 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.599 09:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.509 09:47:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.509 09:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.769 nvme0n1 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.769 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:47.770 09:47:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.770 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.340 nvme0n1 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.340 09:47:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.340 09:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.911 nvme0n1 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.911 09:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:48.911 09:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:48.911 09:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.911 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.171 nvme0n1 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.171 09:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.171 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.431 09:47:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.431 09:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.691 nvme0n1 00:29:49.691 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.691 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.691 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.691 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.691 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.691 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.691 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.691 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.692 09:47:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.692 09:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.633 nvme0n1 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.633 09:47:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.633 09:47:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:50.633 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.633 09:47:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.205 nvme0n1 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.205 09:47:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.205 09:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.776 nvme0n1 00:29:51.776 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:52.037 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.038 09:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.610 nvme0n1 00:29:52.610 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.610 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.610 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.610 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.610 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.610 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.611 
09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.611 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.612 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.612 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:52.612 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.612 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.553 nvme0n1 00:29:53.553 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.553 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.553 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.553 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.553 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.553 09:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.553 nvme0n1 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:53.553 
09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.553 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.554 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.815 nvme0n1 
00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:53.815 09:47:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.815 
09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.815 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.076 nvme0n1 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.076 09:47:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.076 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.336 nvme0n1 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:54.336 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.337 09:47:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.337 09:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.597 nvme0n1 00:29:54.597 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.597 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.597 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.597 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.598 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.859 nvme0n1 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:54.859 
09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.859 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.119 nvme0n1 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.119 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 
00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.120 09:47:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.120 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.381 nvme0n1 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.381 09:47:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.381 09:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.642 nvme0n1 00:29:55.642 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.642 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.643 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.904 nvme0n1 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.904 09:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.904 09:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:55.904 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.904 09:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 nvme0n1 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 
09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 09:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.426 nvme0n1 00:29:56.426 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.426 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.426 09:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.426 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.426 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.426 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:56.686 09:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.686 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.947 nvme0n1 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.947 09:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.947 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.208 nvme0n1 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.208 09:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:57.208 09:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:57.208 
09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.208 09:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.469 nvme0n1 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.469 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.730 09:47:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.730 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.731 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.991 nvme0n1 
00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:57.991 09:47:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.991 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.991 
09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.252 09:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.513 nvme0n1 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.513 09:47:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.513 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.514 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:58.514 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.514 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.084 nvme0n1 00:29:59.084 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.084 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.084 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.084 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.084 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.084 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.085 09:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.657 nvme0n1 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.657 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:59.658 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:59.658 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.658 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:59.918 nvme0n1 00:29:59.918 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.918 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.918 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.918 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.918 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.918 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:00.179 09:47:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.179 09:47:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.179 09:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.750 nvme0n1 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:00.750 09:47:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.750 09:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.693 nvme0n1 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.693 
09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.693 09:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.693 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.265 nvme0n1 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.265 09:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.265 09:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.265 09:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.835 nvme0n1 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.835 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:03.095 09:47:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.095 09:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.665 nvme0n1 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.666 
09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.666 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.927 nvme0n1 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.927 09:47:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.927 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.188 nvme0n1 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:30:04.188 09:47:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:04.188 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.189 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.451 nvme0n1 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.451 09:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.451 09:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.451 09:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.451 nvme0n1 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.451 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.714 09:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.714 nvme0n1 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.714 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.975 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.976 09:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.976 nvme0n1 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.976 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.237 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.237 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.237 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.237 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.237 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.237 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.237 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:05.237 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:05.238 09:47:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.238 nvme0n1 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.238 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.500 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:30:05.500 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.500 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.500 09:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.500 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.500 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.500 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:05.500 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.500 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:05.500 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:05.500 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:05.501 
09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.501 09:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.501 nvme0n1 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.501 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.762 09:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.762 nvme0n1 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.762 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:06.023 09:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.023 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.024 nvme0n1 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.024 
09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.024 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.285 09:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.285 09:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.546 nvme0n1 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.546 09:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:06.546 09:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.546 09:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.546 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.806 nvme0n1 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.806 09:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:06.806 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:30:06.807 09:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.807 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.067 nvme0n1 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.067 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:07.328 09:47:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.328 09:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.589 nvme0n1 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.589 
09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.589 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.851 nvme0n1 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:07.851 09:47:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.851 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.421 nvme0n1 00:30:08.421 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.421 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.421 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.421 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.421 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.421 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.421 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:30:08.421 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.421 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.422 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.422 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.422 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.422 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:08.422 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.422 09:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:30:08.422 09:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.422 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.993 nvme0n1 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:08.993 
09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.993 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.255 nvme0n1 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.255 09:47:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:09.255 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:09.517 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:09.517 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.517 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:09.517 09:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.517 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:09.778 nvme0n1 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.778 
09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.778 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.350 nvme0n1 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTJiOWYyNTJmNDIwMzZkMjZkMjM1MWM5M2Q4OTE2N2WUKbzD: 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: ]] 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjYyYzg0ZDNiNGQ4MzkyMGVmMWUxZTVlYmExZTkyOThlNjRmNzk0N2Q4NGI2ZGU5MDY4N2I4MGM3ZTUwNGQ4Yqi9AI4=: 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.350 09:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.350 09:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.350 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.920 nvme0n1 00:30:10.921 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.921 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.921 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.921 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.921 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.921 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.181 09:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.181 09:47:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.181 09:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.754 nvme0n1 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.754 09:47:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.754 09:47:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.754 09:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.697 nvme0n1 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.697 09:47:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJiNmU5ODE4Y2U1YzFmZDY0M2NlYWVmYTcxNGM3OWU4MDIxYjViMzFhNGYxZjA0HgRmFA==: 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: ]] 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTdkYmQ5NjEwOTE1MzAwNjkxMzY0YjVhZGNmMzk5YjUG4mu2: 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.697 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:12.698 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.698 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:13.268 nvme0n1 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGNiOWRiNmMwYzQ2ZjA0MWVlZjcyZTQzMWQ0YmU0ZGYxMjIzNmZmNmM4YmVmZGZhZmNhMTUzMTZkMGJmNzQ3Yo2dtNs=: 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.268 
09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.268 09:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.840 nvme0n1 00:30:13.840 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.840 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.840 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.840 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.840 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.840 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:30:14.100 
09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:14.100 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.101 request: 00:30:14.101 { 00:30:14.101 "name": "nvme0", 00:30:14.101 "trtype": "tcp", 00:30:14.101 "traddr": "10.0.0.1", 00:30:14.101 "adrfam": "ipv4", 00:30:14.101 "trsvcid": "4420", 00:30:14.101 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:14.101 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:14.101 "prchk_reftag": false, 00:30:14.101 "prchk_guard": false, 00:30:14.101 "hdgst": false, 00:30:14.101 "ddgst": false, 00:30:14.101 "allow_unrecognized_csi": false, 00:30:14.101 "method": "bdev_nvme_attach_controller", 00:30:14.101 "req_id": 1 00:30:14.101 } 00:30:14.101 Got JSON-RPC error response 00:30:14.101 response: 00:30:14.101 { 00:30:14.101 "code": -5, 00:30:14.101 "message": "Input/output 
error" 00:30:14.101 } 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.101 request: 00:30:14.101 { 00:30:14.101 "name": "nvme0", 00:30:14.101 "trtype": "tcp", 00:30:14.101 "traddr": "10.0.0.1", 
00:30:14.101 "adrfam": "ipv4", 00:30:14.101 "trsvcid": "4420", 00:30:14.101 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:14.101 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:14.101 "prchk_reftag": false, 00:30:14.101 "prchk_guard": false, 00:30:14.101 "hdgst": false, 00:30:14.101 "ddgst": false, 00:30:14.101 "dhchap_key": "key2", 00:30:14.101 "allow_unrecognized_csi": false, 00:30:14.101 "method": "bdev_nvme_attach_controller", 00:30:14.101 "req_id": 1 00:30:14.101 } 00:30:14.101 Got JSON-RPC error response 00:30:14.101 response: 00:30:14.101 { 00:30:14.101 "code": -5, 00:30:14.101 "message": "Input/output error" 00:30:14.101 } 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:14.101 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.102 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.102 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.362 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:30:14.362 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:30:14.362 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.362 09:48:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.362 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.362 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.362 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.362 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:14.363 09:48:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.363 request: 00:30:14.363 { 00:30:14.363 "name": "nvme0", 00:30:14.363 "trtype": "tcp", 00:30:14.363 "traddr": "10.0.0.1", 00:30:14.363 "adrfam": "ipv4", 00:30:14.363 "trsvcid": "4420", 00:30:14.363 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:14.363 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:14.363 "prchk_reftag": false, 00:30:14.363 "prchk_guard": false, 00:30:14.363 "hdgst": false, 00:30:14.363 "ddgst": false, 00:30:14.363 "dhchap_key": "key1", 00:30:14.363 "dhchap_ctrlr_key": "ckey2", 00:30:14.363 "allow_unrecognized_csi": false, 00:30:14.363 "method": "bdev_nvme_attach_controller", 00:30:14.363 "req_id": 1 00:30:14.363 } 00:30:14.363 Got JSON-RPC error response 00:30:14.363 response: 00:30:14.363 { 00:30:14.363 "code": -5, 00:30:14.363 "message": "Input/output error" 00:30:14.363 } 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.363 09:48:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.363 nvme0n1 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.363 09:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.363 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.623 09:48:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:14.623 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.624 request: 00:30:14.624 { 00:30:14.624 "name": "nvme0", 00:30:14.624 "dhchap_key": "key1", 00:30:14.624 "dhchap_ctrlr_key": "ckey2", 00:30:14.624 "method": "bdev_nvme_set_keys", 00:30:14.624 "req_id": 1 00:30:14.624 } 00:30:14.624 Got JSON-RPC error response 00:30:14.624 response: 00:30:14.624 { 00:30:14.624 "code": -13, 00:30:14.624 "message": "Permission denied" 00:30:14.624 } 00:30:14.624 
09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:14.624 09:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:16.006 09:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.006 09:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:16.006 09:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.006 09:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.006 09:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.006 09:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:16.006 09:48:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkOWQzNTNjMzI5NDI3MGY0OWJiY2MyYjAyNmM4ZTk4MDE3Yjg0ZGJhNzhiNDIw0Skh3Q==: 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: ]] 00:30:16.948 09:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDYzMTFlNmNjMTJkM2Q4NTUzYmM5NzY5OGFlNDRmMGFiY2JiYzMzMTNjZWIzZGQyr6WS5g==: 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.948 nvme0n1 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.948 09:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZmMjUzZTY0ZWE3MGZmNmE5NTFhOTE5NDg0M2RlZGHLjbPJ: 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: ]] 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjMwNTk3MzliMDNhMDM4OGJjNGYwODA2YTA5ZmFiMmbWKg61: 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:16.948 
09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.948 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.949 request: 00:30:16.949 { 00:30:16.949 "name": "nvme0", 00:30:16.949 "dhchap_key": "key2", 00:30:16.949 "dhchap_ctrlr_key": "ckey1", 00:30:16.949 "method": "bdev_nvme_set_keys", 00:30:16.949 "req_id": 1 00:30:16.949 } 00:30:16.949 Got JSON-RPC error response 00:30:16.949 response: 00:30:16.949 { 00:30:16.949 "code": -13, 00:30:16.949 "message": "Permission denied" 00:30:16.949 } 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.949 09:48:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.949 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.209 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:30:17.209 09:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.148 rmmod nvme_tcp 00:30:18.148 rmmod nvme_fabrics 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 491446 ']' 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 491446 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 491446 ']' 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 491446 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 491446 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 491446' 00:30:18.148 killing process with pid 491446 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 491446 00:30:18.148 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 491446 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.409 09:48:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.320 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:20.320 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:20.320 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:20.320 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:20.320 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:20.320 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:30:20.320 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:20.320 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:20.320 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:20.580 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:20.580 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:20.580 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:20.580 09:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:23.885 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:23.885 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:23.885 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:23.885 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:23.885 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:24.146 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:24.718 09:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8Cz /tmp/spdk.key-null.n1k /tmp/spdk.key-sha256.LGW /tmp/spdk.key-sha384.17h /tmp/spdk.key-sha512.J0N 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:24.718 09:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:28.021 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:28.021 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:28.021 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:28.283 00:30:28.283 real 1m3.408s 00:30:28.283 user 0m57.243s 00:30:28.283 sys 0m16.004s 00:30:28.283 09:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.283 09:48:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.283 ************************************ 00:30:28.283 END TEST nvmf_auth_host 00:30:28.283 ************************************ 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:30:28.545 09:48:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.545 ************************************ 00:30:28.545 START TEST nvmf_digest 00:30:28.545 ************************************ 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:28.545 * Looking for test storage... 00:30:28.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:28.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.545 --rc genhtml_branch_coverage=1 00:30:28.545 --rc genhtml_function_coverage=1 00:30:28.545 --rc genhtml_legend=1 00:30:28.545 --rc geninfo_all_blocks=1 00:30:28.545 --rc geninfo_unexecuted_blocks=1 00:30:28.545 00:30:28.545 ' 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:28.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.545 --rc genhtml_branch_coverage=1 00:30:28.545 --rc genhtml_function_coverage=1 00:30:28.545 --rc genhtml_legend=1 00:30:28.545 --rc geninfo_all_blocks=1 00:30:28.545 --rc geninfo_unexecuted_blocks=1 00:30:28.545 00:30:28.545 ' 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:28.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.545 --rc genhtml_branch_coverage=1 00:30:28.545 --rc genhtml_function_coverage=1 00:30:28.545 --rc genhtml_legend=1 00:30:28.545 --rc geninfo_all_blocks=1 00:30:28.545 --rc geninfo_unexecuted_blocks=1 00:30:28.545 00:30:28.545 ' 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:28.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.545 --rc genhtml_branch_coverage=1 00:30:28.545 --rc genhtml_function_coverage=1 00:30:28.545 --rc genhtml_legend=1 00:30:28.545 --rc geninfo_all_blocks=1 00:30:28.545 --rc geninfo_unexecuted_blocks=1 00:30:28.545 00:30:28.545 ' 00:30:28.545 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:28.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.808 09:48:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.808 09:48:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.953 09:48:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:36.953 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:36.953 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.953 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:36.954 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:36.954 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:36.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:30:36.954 00:30:36.954 --- 10.0.0.2 ping statistics --- 00:30:36.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.954 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:36.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:30:36.954 00:30:36.954 --- 10.0.0.1 ping statistics --- 00:30:36.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.954 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:36.954 ************************************ 00:30:36.954 START TEST nvmf_digest_clean 00:30:36.954 ************************************ 00:30:36.954 
09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=509519 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 509519 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 509519 ']' 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.954 09:48:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.954 09:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:36.954 [2024-11-19 09:48:22.859014] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:36.954 [2024-11-19 09:48:22.859079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.954 [2024-11-19 09:48:22.959722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.954 [2024-11-19 09:48:23.010658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.954 [2024-11-19 09:48:23.010709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.954 [2024-11-19 09:48:23.010717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.954 [2024-11-19 09:48:23.010724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.954 [2024-11-19 09:48:23.010731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:36.954 [2024-11-19 09:48:23.011466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.954 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.954 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:36.954 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.954 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.954 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:37.216 null0 00:30:37.216 [2024-11-19 09:48:23.830778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.216 [2024-11-19 09:48:23.855063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=509720 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 509720 /var/tmp/bperf.sock 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 509720 ']' 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:37.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.216 09:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:37.216 [2024-11-19 09:48:23.915440] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:37.216 [2024-11-19 09:48:23.915504] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509720 ] 00:30:37.479 [2024-11-19 09:48:24.007181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.479 [2024-11-19 09:48:24.059994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.048 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.048 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:38.048 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:38.048 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:38.048 09:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:38.308 09:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:38.308 09:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:38.878 nvme0n1 00:30:38.878 09:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:38.878 09:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:38.878 Running I/O for 2 seconds... 00:30:41.201 21309.00 IOPS, 83.24 MiB/s [2024-11-19T08:48:27.949Z] 21211.00 IOPS, 82.86 MiB/s 00:30:41.201 Latency(us) 00:30:41.201 [2024-11-19T08:48:27.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.201 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:41.201 nvme0n1 : 2.04 20814.97 81.31 0.00 0.00 6027.53 3167.57 45001.39 00:30:41.201 [2024-11-19T08:48:27.949Z] =================================================================================================================== 00:30:41.201 [2024-11-19T08:48:27.949Z] Total : 20814.97 81.31 0.00 0.00 6027.53 3167.57 45001.39 00:30:41.201 { 00:30:41.201 "results": [ 00:30:41.201 { 00:30:41.201 "job": "nvme0n1", 00:30:41.201 "core_mask": "0x2", 00:30:41.201 "workload": "randread", 00:30:41.201 "status": "finished", 00:30:41.201 "queue_depth": 128, 00:30:41.201 "io_size": 4096, 00:30:41.201 "runtime": 2.044202, 00:30:41.201 "iops": 20814.968383750725, 00:30:41.201 "mibps": 81.30847024902627, 00:30:41.201 "io_failed": 0, 00:30:41.201 "io_timeout": 0, 00:30:41.201 "avg_latency_us": 6027.5311304347815, 00:30:41.201 "min_latency_us": 3167.5733333333333, 00:30:41.201 "max_latency_us": 45001.386666666665 00:30:41.201 } 00:30:41.201 ], 00:30:41.201 "core_count": 1 00:30:41.201 } 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:41.201 | select(.opcode=="crc32c") 00:30:41.201 | "\(.module_name) \(.executed)"' 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 509720 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 509720 ']' 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 509720 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 509720 00:30:41.201 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:41.202 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:41.202 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 509720' 00:30:41.202 killing process with pid 509720 00:30:41.202 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 509720 00:30:41.202 Received shutdown signal, test time was about 2.000000 seconds 00:30:41.202 00:30:41.202 Latency(us) 00:30:41.202 [2024-11-19T08:48:27.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.202 [2024-11-19T08:48:27.950Z] =================================================================================================================== 00:30:41.202 [2024-11-19T08:48:27.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:41.202 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 509720 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=510553 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 510553 /var/tmp/bperf.sock 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 510553 ']' 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:41.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:41.462 09:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:41.462 [2024-11-19 09:48:28.002134] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:41.462 [2024-11-19 09:48:28.002195] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510553 ] 00:30:41.462 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:41.462 Zero copy mechanism will not be used. 
00:30:41.462 [2024-11-19 09:48:28.090266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.462 [2024-11-19 09:48:28.125652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.403 09:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:42.403 09:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:42.403 09:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:42.403 09:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:42.403 09:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:42.403 09:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:42.403 09:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:42.677 nvme0n1 00:30:42.677 09:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:42.677 09:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:42.677 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:42.677 Zero copy mechanism will not be used. 00:30:42.677 Running I/O for 2 seconds... 
00:30:45.002 3027.00 IOPS, 378.38 MiB/s [2024-11-19T08:48:31.750Z] 3136.00 IOPS, 392.00 MiB/s 00:30:45.002 Latency(us) 00:30:45.002 [2024-11-19T08:48:31.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.002 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:45.002 nvme0n1 : 2.00 3139.85 392.48 0.00 0.00 5093.36 901.12 7536.64 00:30:45.002 [2024-11-19T08:48:31.750Z] =================================================================================================================== 00:30:45.002 [2024-11-19T08:48:31.750Z] Total : 3139.85 392.48 0.00 0.00 5093.36 901.12 7536.64 00:30:45.002 { 00:30:45.002 "results": [ 00:30:45.002 { 00:30:45.002 "job": "nvme0n1", 00:30:45.002 "core_mask": "0x2", 00:30:45.002 "workload": "randread", 00:30:45.002 "status": "finished", 00:30:45.002 "queue_depth": 16, 00:30:45.002 "io_size": 131072, 00:30:45.002 "runtime": 2.002646, 00:30:45.002 "iops": 3139.845983763481, 00:30:45.002 "mibps": 392.4807479704351, 00:30:45.002 "io_failed": 0, 00:30:45.002 "io_timeout": 0, 00:30:45.002 "avg_latency_us": 5093.356132315522, 00:30:45.002 "min_latency_us": 901.12, 00:30:45.002 "max_latency_us": 7536.64 00:30:45.002 } 00:30:45.002 ], 00:30:45.002 "core_count": 1 00:30:45.002 } 00:30:45.002 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:45.002 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:45.002 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:45.002 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:45.002 | select(.opcode=="crc32c") 00:30:45.002 | "\(.module_name) \(.executed)"' 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 510553 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 510553 ']' 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 510553 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 510553 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 510553' 00:30:45.003 killing process with pid 510553 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 510553 00:30:45.003 Received shutdown signal, test time was about 2.000000 seconds 00:30:45.003 
00:30:45.003 Latency(us) 00:30:45.003 [2024-11-19T08:48:31.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.003 [2024-11-19T08:48:31.751Z] =================================================================================================================== 00:30:45.003 [2024-11-19T08:48:31.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:45.003 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 510553 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=511239 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 511239 /var/tmp/bperf.sock 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 511239 ']' 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:45.263 09:48:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:45.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.263 09:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:45.263 [2024-11-19 09:48:31.853936] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:45.264 [2024-11-19 09:48:31.853991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511239 ] 00:30:45.264 [2024-11-19 09:48:31.937239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.264 [2024-11-19 09:48:31.966537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.204 09:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.204 09:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:46.204 09:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:46.204 09:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:46.204 09:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:46.205 09:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:46.205 09:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:46.464 nvme0n1 00:30:46.464 09:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:46.464 09:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:46.464 Running I/O for 2 seconds... 
00:30:48.781 29143.00 IOPS, 113.84 MiB/s [2024-11-19T08:48:35.529Z] 29331.50 IOPS, 114.58 MiB/s 00:30:48.781 Latency(us) 00:30:48.781 [2024-11-19T08:48:35.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.781 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.781 nvme0n1 : 2.00 29332.17 114.58 0.00 0.00 4356.86 3181.23 14745.60 00:30:48.781 [2024-11-19T08:48:35.529Z] =================================================================================================================== 00:30:48.781 [2024-11-19T08:48:35.529Z] Total : 29332.17 114.58 0.00 0.00 4356.86 3181.23 14745.60 00:30:48.781 { 00:30:48.781 "results": [ 00:30:48.781 { 00:30:48.781 "job": "nvme0n1", 00:30:48.781 "core_mask": "0x2", 00:30:48.781 "workload": "randwrite", 00:30:48.781 "status": "finished", 00:30:48.781 "queue_depth": 128, 00:30:48.781 "io_size": 4096, 00:30:48.781 "runtime": 2.004318, 00:30:48.781 "iops": 29332.171840995292, 00:30:48.781 "mibps": 114.57879625388786, 00:30:48.781 "io_failed": 0, 00:30:48.781 "io_timeout": 0, 00:30:48.781 "avg_latency_us": 4356.861551371242, 00:30:48.781 "min_latency_us": 3181.2266666666665, 00:30:48.781 "max_latency_us": 14745.6 00:30:48.781 } 00:30:48.781 ], 00:30:48.781 "core_count": 1 00:30:48.781 } 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:48.781 | select(.opcode=="crc32c") 00:30:48.781 | "\(.module_name) \(.executed)"' 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 511239 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 511239 ']' 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 511239 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 511239 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 511239' 00:30:48.781 killing process with pid 511239 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 511239 00:30:48.781 Received shutdown signal, test time was about 2.000000 seconds 00:30:48.781 
00:30:48.781 Latency(us) 00:30:48.781 [2024-11-19T08:48:35.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.781 [2024-11-19T08:48:35.529Z] =================================================================================================================== 00:30:48.781 [2024-11-19T08:48:35.529Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.781 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 511239 00:30:49.040 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:49.040 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:49.040 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:49.040 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:49.040 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:49.040 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:49.040 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:49.041 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=511925 00:30:49.041 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 511925 /var/tmp/bperf.sock 00:30:49.041 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 511925 ']' 00:30:49.041 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:49.041 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:49.041 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.041 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:49.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:49.041 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.041 09:48:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:49.041 [2024-11-19 09:48:35.651924] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:49.041 [2024-11-19 09:48:35.651979] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511925 ] 00:30:49.041 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:49.041 Zero copy mechanism will not be used. 
00:30:49.041 [2024-11-19 09:48:35.733245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.041 [2024-11-19 09:48:35.762524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.980 09:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.980 09:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:49.980 09:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:49.980 09:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:49.980 09:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:49.980 09:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.980 09:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:50.240 nvme0n1 00:30:50.241 09:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:50.241 09:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:50.500 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:50.500 Zero copy mechanism will not be used. 00:30:50.500 Running I/O for 2 seconds... 
00:30:52.381 3290.00 IOPS, 411.25 MiB/s [2024-11-19T08:48:39.129Z] 4255.50 IOPS, 531.94 MiB/s 00:30:52.381 Latency(us) 00:30:52.381 [2024-11-19T08:48:39.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.381 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:52.381 nvme0n1 : 2.00 4257.25 532.16 0.00 0.00 3754.29 1529.17 7263.57 00:30:52.381 [2024-11-19T08:48:39.129Z] =================================================================================================================== 00:30:52.381 [2024-11-19T08:48:39.129Z] Total : 4257.25 532.16 0.00 0.00 3754.29 1529.17 7263.57 00:30:52.381 { 00:30:52.381 "results": [ 00:30:52.381 { 00:30:52.381 "job": "nvme0n1", 00:30:52.381 "core_mask": "0x2", 00:30:52.381 "workload": "randwrite", 00:30:52.382 "status": "finished", 00:30:52.382 "queue_depth": 16, 00:30:52.382 "io_size": 131072, 00:30:52.382 "runtime": 2.00364, 00:30:52.382 "iops": 4257.251801720868, 00:30:52.382 "mibps": 532.1564752151085, 00:30:52.382 "io_failed": 0, 00:30:52.382 "io_timeout": 0, 00:30:52.382 "avg_latency_us": 3754.29372098476, 00:30:52.382 "min_latency_us": 1529.1733333333334, 00:30:52.382 "max_latency_us": 7263.573333333334 00:30:52.382 } 00:30:52.382 ], 00:30:52.382 "core_count": 1 00:30:52.382 } 00:30:52.382 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:52.382 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:52.382 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:52.382 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:52.382 | select(.opcode=="crc32c") 00:30:52.382 | "\(.module_name) \(.executed)"' 00:30:52.382 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:52.649 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 511925 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 511925 ']' 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 511925 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 511925 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 511925' 00:30:52.650 killing process with pid 511925 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 511925 00:30:52.650 Received shutdown signal, test time was about 2.000000 seconds 00:30:52.650 
00:30:52.650 Latency(us) 00:30:52.650 [2024-11-19T08:48:39.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.650 [2024-11-19T08:48:39.398Z] =================================================================================================================== 00:30:52.650 [2024-11-19T08:48:39.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.650 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 511925 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 509519 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 509519 ']' 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 509519 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 509519 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 509519' 00:30:52.913 killing process with pid 509519 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 509519 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 509519 00:30:52.913 00:30:52.913 real 0m16.807s 
00:30:52.913 user 0m33.336s 00:30:52.913 sys 0m3.684s 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:52.913 ************************************ 00:30:52.913 END TEST nvmf_digest_clean 00:30:52.913 ************************************ 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.913 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:53.173 ************************************ 00:30:53.173 START TEST nvmf_digest_error 00:30:53.173 ************************************ 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=512806 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 512806 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 512806 ']' 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.173 09:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:53.173 [2024-11-19 09:48:39.744688] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:53.173 [2024-11-19 09:48:39.744739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.173 [2024-11-19 09:48:39.836833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.173 [2024-11-19 09:48:39.873698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.173 [2024-11-19 09:48:39.873740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:53.173 [2024-11-19 09:48:39.873745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.173 [2024-11-19 09:48:39.873750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.173 [2024-11-19 09:48:39.873754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.173 [2024-11-19 09:48:39.874337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:54.115 [2024-11-19 09:48:40.596341] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.115 09:48:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.115 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:54.115 null0 00:30:54.115 [2024-11-19 09:48:40.673618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.116 [2024-11-19 09:48:40.697798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=512987 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 512987 /var/tmp/bperf.sock 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 512987 ']' 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:54.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.116 09:48:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:54.116 [2024-11-19 09:48:40.752431] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:54.116 [2024-11-19 09:48:40.752481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512987 ] 00:30:54.116 [2024-11-19 09:48:40.836447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.376 [2024-11-19 09:48:40.865938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.946 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.946 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:54.946 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:54.946 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:55.207 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:55.207 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.207 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:55.207 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.207 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:55.207 09:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:55.467 nvme0n1
00:30:55.467 09:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:55.467 09:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.467 09:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:55.467 09:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.467 09:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:55.468 09:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:55.468 Running I/O for 2 seconds... 00:30:55.468 [2024-11-19 09:48:42.191373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.468 [2024-11-19 09:48:42.191405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.468 [2024-11-19 09:48:42.191414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.468 [2024-11-19 09:48:42.203215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.468 [2024-11-19 09:48:42.203235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.468 [2024-11-19 09:48:42.203246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.728 [2024-11-19 09:48:42.212564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.728 [2024-11-19 09:48:42.212583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.728 [2024-11-19 09:48:42.212590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.728 [2024-11-19 09:48:42.221283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.728 [2024-11-19 09:48:42.221300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5651 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.728 [2024-11-19 09:48:42.221307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.728 [2024-11-19 09:48:42.230008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.728 [2024-11-19 09:48:42.230026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.728 [2024-11-19 09:48:42.230033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.728 [2024-11-19 09:48:42.239296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.728 [2024-11-19 09:48:42.239313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.728 [2024-11-19 09:48:42.239320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.728 [2024-11-19 09:48:42.248815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.728 [2024-11-19 09:48:42.248831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.728 [2024-11-19 09:48:42.248838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.728 [2024-11-19 09:48:42.256486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.728 [2024-11-19 09:48:42.256503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.728 [2024-11-19 09:48:42.256509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.728 [2024-11-19 09:48:42.266751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.728 [2024-11-19 09:48:42.266768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.728 [2024-11-19 09:48:42.266774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.728 [2024-11-19 09:48:42.275619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.728 [2024-11-19 09:48:42.275636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.275642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.285224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.285244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.285251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.294877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 
00:30:55.729 [2024-11-19 09:48:42.294893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.294899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.304045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.304061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.304068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.311649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.311665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.311671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.320506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.320523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.320529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.329757] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.329774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.329780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.338594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.338610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.338617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.347026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.347042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.347048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.356242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.356258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.356265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.364750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.364766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.364773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.373994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.374011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.374017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.383439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.383455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.383461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.393738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.393754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.393760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.401127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.401144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.401150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.412398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.412415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.412421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.422438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.422455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.422462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.430617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.430634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 
09:48:42.430640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.439445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.439462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.439471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.448679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.448695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.448702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.456981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.456998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.729 [2024-11-19 09:48:42.457004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.729 [2024-11-19 09:48:42.465738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.729 [2024-11-19 09:48:42.465754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.730 [2024-11-19 09:48:42.465760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.475334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.475350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.475357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.483733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.483750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.483757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.492227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.492244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.492250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.502349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.502366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.502372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.511526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.511543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.511549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.519533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.519553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.519560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.528831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.528848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.528854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.538021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 
00:30:55.991 [2024-11-19 09:48:42.538037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.538043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.546538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.546554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.546561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.555424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.555440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.555446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.564802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.564819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.564825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.573794] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.991 [2024-11-19 09:48:42.573810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.991 [2024-11-19 09:48:42.573816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.991 [2024-11-19 09:48:42.582096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.582112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.582118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.591263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.591280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.591289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.599440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.599456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.599462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.608782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.608798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.608804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.618345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.618362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.618368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.626587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.626603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.626609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.635339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.635355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.635362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.644594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.644611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.644617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.654419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.654436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.654443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.662443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.662459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 09:48:42.662465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:55.992 [2024-11-19 09:48:42.671302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:55.992 [2024-11-19 09:48:42.671322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.992 [2024-11-19 
09:48:42.671328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:55.992 [2024-11-19 09:48:42.680647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:55.992 [2024-11-19 09:48:42.680664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.992 [2024-11-19 09:48:42.680670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:55.992 [2024-11-19 09:48:42.690387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:55.992 [2024-11-19 09:48:42.690404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.992 [2024-11-19 09:48:42.690410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:55.992 [2024-11-19 09:48:42.698473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:55.992 [2024-11-19 09:48:42.698490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.992 [2024-11-19 09:48:42.698496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:55.992 [2024-11-19 09:48:42.707995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:55.992 [2024-11-19 09:48:42.708011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.992 [2024-11-19 09:48:42.708017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:55.992 [2024-11-19 09:48:42.718406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:55.992 [2024-11-19 09:48:42.718423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.992 [2024-11-19 09:48:42.718429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:55.992 [2024-11-19 09:48:42.726781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:55.992 [2024-11-19 09:48:42.726798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.992 [2024-11-19 09:48:42.726804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.254 [2024-11-19 09:48:42.736216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.254 [2024-11-19 09:48:42.736234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.254 [2024-11-19 09:48:42.736240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.254 [2024-11-19 09:48:42.748129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.254 [2024-11-19 09:48:42.748145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.254 [2024-11-19 09:48:42.748152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.254 [2024-11-19 09:48:42.760069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.254 [2024-11-19 09:48:42.760086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.254 [2024-11-19 09:48:42.760092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.254 [2024-11-19 09:48:42.771016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.254 [2024-11-19 09:48:42.771033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.771039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.779847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.779864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.779870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.788246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.788262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.788268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.797017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.797034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.797040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.806524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.806541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.806548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.814082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.814099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.814105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.824167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.824185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.824191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.833642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.833660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.833669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.842868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.842885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.842892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.850860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.850876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.850883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.860157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.860178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.860185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.869551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.869567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.869573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.878353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.878369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.878375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.886716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.886732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.886739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.895847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.895864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.895870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.905023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.905039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.905045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.913590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.913610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.913617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.922489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.922506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.922512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.931207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.931223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.931229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.939959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.939976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.939982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.949146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.949166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.949172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.959993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.960010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.960016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.968441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.255 [2024-11-19 09:48:42.968458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.255 [2024-11-19 09:48:42.968464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.255 [2024-11-19 09:48:42.976914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.256 [2024-11-19 09:48:42.976931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.256 [2024-11-19 09:48:42.976936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.256 [2024-11-19 09:48:42.986230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.256 [2024-11-19 09:48:42.986246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.256 [2024-11-19 09:48:42.986252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.256 [2024-11-19 09:48:42.996667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.256 [2024-11-19 09:48:42.996684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.256 [2024-11-19 09:48:42.996690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.517 [2024-11-19 09:48:43.004999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.517 [2024-11-19 09:48:43.005016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.517 [2024-11-19 09:48:43.005022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.517 [2024-11-19 09:48:43.016619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.517 [2024-11-19 09:48:43.016636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.517 [2024-11-19 09:48:43.016643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.517 [2024-11-19 09:48:43.025069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.517 [2024-11-19 09:48:43.025086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.517 [2024-11-19 09:48:43.025092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.517 [2024-11-19 09:48:43.033435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.517 [2024-11-19 09:48:43.033453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.517 [2024-11-19 09:48:43.033459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.517 [2024-11-19 09:48:43.042953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.517 [2024-11-19 09:48:43.042970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.517 [2024-11-19 09:48:43.042976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.517 [2024-11-19 09:48:43.051615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.517 [2024-11-19 09:48:43.051632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.517 [2024-11-19 09:48:43.051638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.517 [2024-11-19 09:48:43.060701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.517 [2024-11-19 09:48:43.060717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.517 [2024-11-19 09:48:43.060724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.517 [2024-11-19 09:48:43.069258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.517 [2024-11-19 09:48:43.069278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.517 [2024-11-19 09:48:43.069285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.077793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.077810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.077816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.086639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.086657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.086663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.095903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.095919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.095926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.105533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.105550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.105556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.113352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.113368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.113374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.122419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.122436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.122442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.132153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.132175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.132181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.139901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.139918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.139925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.149621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.149638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.149644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.161941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.161958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.161964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.172402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.172419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.172425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 27602.00 IOPS, 107.82 MiB/s [2024-11-19T08:48:43.266Z] [2024-11-19 09:48:43.182083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.182100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.182107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.191171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.191187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.191193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.198998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.199015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.199022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.208660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.208676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.208682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.217563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.217579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.217586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.225910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.225926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.225936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.235512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.235529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.235535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.243803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.243820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.243826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.252671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.252687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.252693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.518 [2024-11-19 09:48:43.261403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.518 [2024-11-19 09:48:43.261420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.518 [2024-11-19 09:48:43.261427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.270940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.270957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.270963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.279418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.279435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.279442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.288640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.288657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.288663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.297811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.297827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.297834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.306007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.306030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.306036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.315604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.315621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.315627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.326659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.326675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.326681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.335179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.335196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.335202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.345009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.345026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.345032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.353573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.353589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.353595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.362497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.362514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.362520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.371225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.371242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.371248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.380509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.380526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.380532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.781 [2024-11-19 09:48:43.389641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.781 [2024-11-19 09:48:43.389657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.781 [2024-11-19 09:48:43.389663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.782 [2024-11-19 09:48:43.398419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.782 [2024-11-19 09:48:43.398435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.782 [2024-11-19 09:48:43.398442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.782 [2024-11-19 09:48:43.406982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.782 [2024-11-19 09:48:43.406999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.782 [2024-11-19 09:48:43.407005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:56.782 [2024-11-19 09:48:43.414980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0)
00:30:56.782 [2024-11-19 09:48:43.414997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.782 [2024-11-19
09:48:43.415003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.424150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.424172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.424178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.435071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.435088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.435095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.443164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.443181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.443187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.451402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.451419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13160 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.451425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.461291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.461311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.461318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.470596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.470612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.470618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.478574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.478591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.478596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.487719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.487736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.487742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.497373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.497390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.497396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.505930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.505946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.505952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.513614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.513631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.513637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:56.782 [2024-11-19 09:48:43.523339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13c35c0) 00:30:56.782 [2024-11-19 09:48:43.523355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.782 [2024-11-19 09:48:43.523362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.531736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.531753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.531759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.540769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.540785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.540791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.549263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.549279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.549285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.558648] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.558665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.558671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.567131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.567148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.567154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.575959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.575975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.575981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.585450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.585467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.585473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.593405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.593422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.593428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.603787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.603803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.603809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.612132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.612148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.612157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.621432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.621448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.621454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.630557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.630573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.630580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.638504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.638521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.638527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.647921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.647937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.647943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.656180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.656197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 
09:48:43.656204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.665651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.665667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.665673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.673876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.673892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.673898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.683174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.683191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.683197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.691778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.691797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4397 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.691804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.700689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.700705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.700712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.045 [2024-11-19 09:48:43.709090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.045 [2024-11-19 09:48:43.709106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.045 [2024-11-19 09:48:43.709113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.046 [2024-11-19 09:48:43.718066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.046 [2024-11-19 09:48:43.718082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.046 [2024-11-19 09:48:43.718088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.046 [2024-11-19 09:48:43.727457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.046 [2024-11-19 09:48:43.727473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.046 [2024-11-19 09:48:43.727480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.046 [2024-11-19 09:48:43.735679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.046 [2024-11-19 09:48:43.735695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.046 [2024-11-19 09:48:43.735702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.046 [2024-11-19 09:48:43.744835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.046 [2024-11-19 09:48:43.744852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.046 [2024-11-19 09:48:43.744858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.046 [2024-11-19 09:48:43.753507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.046 [2024-11-19 09:48:43.753524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.046 [2024-11-19 09:48:43.753530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.046 [2024-11-19 09:48:43.762960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.046 [2024-11-19 
09:48:43.762977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.046 [2024-11-19 09:48:43.762983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.046 [2024-11-19 09:48:43.771040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.046 [2024-11-19 09:48:43.771057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.046 [2024-11-19 09:48:43.771063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.046 [2024-11-19 09:48:43.779959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.046 [2024-11-19 09:48:43.779975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.046 [2024-11-19 09:48:43.779981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.308 [2024-11-19 09:48:43.789318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.308 [2024-11-19 09:48:43.789335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.308 [2024-11-19 09:48:43.789341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.308 [2024-11-19 09:48:43.797312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x13c35c0) 00:30:57.308 [2024-11-19 09:48:43.797329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.308 [2024-11-19 09:48:43.797335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.308 [2024-11-19 09:48:43.806594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.308 [2024-11-19 09:48:43.806611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.308 [2024-11-19 09:48:43.806617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.308 [2024-11-19 09:48:43.815424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.308 [2024-11-19 09:48:43.815440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.308 [2024-11-19 09:48:43.815446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.308 [2024-11-19 09:48:43.823829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.308 [2024-11-19 09:48:43.823845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.308 [2024-11-19 09:48:43.823851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.308 [2024-11-19 09:48:43.831842] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.308 [2024-11-19 09:48:43.831859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.308 [2024-11-19 09:48:43.831865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.308 [2024-11-19 09:48:43.841610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.308 [2024-11-19 09:48:43.841626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.308 [2024-11-19 09:48:43.841635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.308 [2024-11-19 09:48:43.850986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.851002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.851008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.860337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.860353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.860359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.868679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.868696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.868702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.877485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.877502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.877508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.886660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.886676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.886682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.895195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.895211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.895218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.903037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.903054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.903060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.912339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.912355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.912362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.921540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.921556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.921562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.929870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.929886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 
09:48:43.929892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.939284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.939300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.939306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.948067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.948083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.948089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.956039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.956055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.956061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.965875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.965892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22239 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.965898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.975501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.975517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.975524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.983723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.983739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.983746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:43.994198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:43.994214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:43.994224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:44.003316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:44.003333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:44.003339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:44.012610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:44.012626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:44.012632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:44.021133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:44.021150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:44.021157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:44.030853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:44.030870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:44.030877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:44.038732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:44.038749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:44.038755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.309 [2024-11-19 09:48:44.048446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.309 [2024-11-19 09:48:44.048462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.309 [2024-11-19 09:48:44.048469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.571 [2024-11-19 09:48:44.056918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.571 [2024-11-19 09:48:44.056935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.571 [2024-11-19 09:48:44.056942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.571 [2024-11-19 09:48:44.065101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.571 [2024-11-19 09:48:44.065118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.571 [2024-11-19 09:48:44.065124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.571 [2024-11-19 09:48:44.074293] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.571 [2024-11-19 09:48:44.074313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.571 [2024-11-19 09:48:44.074319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.571 [2024-11-19 09:48:44.083230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.571 [2024-11-19 09:48:44.083247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.571 [2024-11-19 09:48:44.083253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.571 [2024-11-19 09:48:44.091101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.571 [2024-11-19 09:48:44.091117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.571 [2024-11-19 09:48:44.091123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.571 [2024-11-19 09:48:44.100447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.571 [2024-11-19 09:48:44.100463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.571 [2024-11-19 09:48:44.100469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:30:57.571 [2024-11-19 09:48:44.110812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.571 [2024-11-19 09:48:44.110828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.571 [2024-11-19 09:48:44.110834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.571 [2024-11-19 09:48:44.119457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.571 [2024-11-19 09:48:44.119474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.571 [2024-11-19 09:48:44.119480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.572 [2024-11-19 09:48:44.127716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.572 [2024-11-19 09:48:44.127732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.572 [2024-11-19 09:48:44.127738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.572 [2024-11-19 09:48:44.137022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.572 [2024-11-19 09:48:44.137038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.572 [2024-11-19 09:48:44.137044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.572 [2024-11-19 09:48:44.145319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.572 [2024-11-19 09:48:44.145335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.572 [2024-11-19 09:48:44.145342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.572 [2024-11-19 09:48:44.153903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.572 [2024-11-19 09:48:44.153919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.572 [2024-11-19 09:48:44.153925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.572 [2024-11-19 09:48:44.162532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.572 [2024-11-19 09:48:44.162549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.572 [2024-11-19 09:48:44.162555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.572 [2024-11-19 09:48:44.171688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.572 [2024-11-19 09:48:44.171705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.572 [2024-11-19 09:48:44.171711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.572 28133.00 IOPS, 109.89 MiB/s [2024-11-19T08:48:44.320Z] [2024-11-19 09:48:44.180225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13c35c0) 00:30:57.572 [2024-11-19 09:48:44.180242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.572 [2024-11-19 09:48:44.180248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.572 00:30:57.572 Latency(us) 00:30:57.572 [2024-11-19T08:48:44.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.572 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:57.572 nvme0n1 : 2.00 28150.85 109.96 0.00 0.00 4541.41 1979.73 16274.77 00:30:57.572 [2024-11-19T08:48:44.320Z] =================================================================================================================== 00:30:57.572 [2024-11-19T08:48:44.320Z] Total : 28150.85 109.96 0.00 0.00 4541.41 1979.73 16274.77 00:30:57.572 { 00:30:57.572 "results": [ 00:30:57.572 { 00:30:57.572 "job": "nvme0n1", 00:30:57.572 "core_mask": "0x2", 00:30:57.572 "workload": "randread", 00:30:57.572 "status": "finished", 00:30:57.572 "queue_depth": 128, 00:30:57.572 "io_size": 4096, 00:30:57.572 "runtime": 2.003847, 00:30:57.572 "iops": 28150.851836492508, 00:30:57.572 "mibps": 109.96426498629886, 00:30:57.572 "io_failed": 0, 00:30:57.572 "io_timeout": 0, 00:30:57.572 "avg_latency_us": 4541.414404538203, 00:30:57.572 "min_latency_us": 1979.7333333333333, 00:30:57.572 "max_latency_us": 16274.773333333333 00:30:57.572 } 00:30:57.572 ], 00:30:57.572 "core_count": 1 00:30:57.572 } 00:30:57.572 09:48:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:57.572 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:57.572 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:57.572 | .driver_specific 00:30:57.572 | .nvme_error 00:30:57.572 | .status_code 00:30:57.572 | .command_transient_transport_error' 00:30:57.572 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 512987 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 512987 ']' 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 512987 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 512987 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 512987' 00:30:57.835 
killing process with pid 512987 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 512987 00:30:57.835 Received shutdown signal, test time was about 2.000000 seconds 00:30:57.835 00:30:57.835 Latency(us) 00:30:57.835 [2024-11-19T08:48:44.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.835 [2024-11-19T08:48:44.583Z] =================================================================================================================== 00:30:57.835 [2024-11-19T08:48:44.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 512987 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=513677 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 513677 /var/tmp/bperf.sock 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 513677 ']' 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:57.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.835 09:48:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:58.096 [2024-11-19 09:48:44.616692] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:58.096 [2024-11-19 09:48:44.616746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513677 ] 00:30:58.096 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:58.096 Zero copy mechanism will not be used. 
00:30:58.096 [2024-11-19 09:48:44.702511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.096 [2024-11-19 09:48:44.731295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.038 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.038 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:59.038 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:59.038 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:59.038 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:59.039 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.039 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:59.039 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.039 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.039 09:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.299 nvme0n1 00:30:59.299 09:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:59.299 09:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.299 09:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:59.299 09:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.299 09:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:59.299 09:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:59.560 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:59.561 Zero copy mechanism will not be used. 00:30:59.561 Running I/O for 2 seconds... 00:30:59.561 [2024-11-19 09:48:46.132647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.132679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.132687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:59.561 [2024-11-19 09:48:46.137244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.137266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.137273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:59.561 
[2024-11-19 09:48:46.141720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.141740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.141747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:59.561 [2024-11-19 09:48:46.146269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.146293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.146300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:59.561 [2024-11-19 09:48:46.150637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.150655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.150662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:59.561 [2024-11-19 09:48:46.157492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.157509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.157516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:59.561 [2024-11-19 09:48:46.162056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.162074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.162080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:59.561 [2024-11-19 09:48:46.169032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.169050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.169057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:59.561 [2024-11-19 09:48:46.178942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.178960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.178967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:59.561 [2024-11-19 09:48:46.187004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:30:59.561 [2024-11-19 09:48:46.187021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.561 [2024-11-19 09:48:46.187028] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.193037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.193054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.193061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.201773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.201790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.201797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.205688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.205706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.205712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.209684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.209701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.209708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.214597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.214615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.214621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.222410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.222428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.222435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.231800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.231818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.231824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.237234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.237251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.237258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.244422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.244439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.244446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.255211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.255228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.255234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.266499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.266517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.266527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.278265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.278282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.278288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.287628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.287646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.287652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.292189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.292206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.292212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.561 [2024-11-19 09:48:46.296669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.561 [2024-11-19 09:48:46.296686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.561 [2024-11-19 09:48:46.296693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.306801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.306819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.306825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.311316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.311333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.311339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.322811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.322830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.322836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.327235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.327253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.327259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.335634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.335656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.335662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.346031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.346049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.346055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.353561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.353578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.353584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.365041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.365059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.365065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.369505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.369523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.369529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.373914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.373932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.823 [2024-11-19 09:48:46.373938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.823 [2024-11-19 09:48:46.378257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.823 [2024-11-19 09:48:46.378275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.378282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.388113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.388131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.388138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.394923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.394941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.394947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.402311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.402329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.402335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.411444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.411461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.411467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.422317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.422335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.422341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.433011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.433029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.433035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.440986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.441004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.441010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.446105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.446122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.446128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.450620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.450638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.450644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.455237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.455255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.455261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.464045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.464063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.464072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.471721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.471738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.471745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.481260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.481278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.481284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.491637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.491654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.491660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.502992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.503009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.503015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.510605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.510622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.510629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.520341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.520358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.520364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.530966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.530984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.530991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.536282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.536300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.536306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.544374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.544396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.544402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.553763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.553781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.553787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.558477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.558495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.558501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:59.824 [2024-11-19 09:48:46.564615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:30:59.824 [2024-11-19 09:48:46.564633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:59.824 [2024-11-19 09:48:46.564639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.574902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.574921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.574927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.583118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.583136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.583143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.593533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.593551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.593557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.603646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.603664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.603671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.615746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.615764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.615771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.627577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.627596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.627603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.640036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.640054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.640060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.651734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.651752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.651758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.661250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.661268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.661275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.671085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.671102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.671108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.673790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.673808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.673814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.678219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.678237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.678243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.686630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.686649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.686655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.693107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.693125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.693135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.700385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.700403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.700409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.712597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.712616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.712622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.723480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.723498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.723505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.735273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.735291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.735297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.745900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.745918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.745924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.750880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.750898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.750904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.755223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.755240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.755247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.759990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.760008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.760014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.765212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.765234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.765241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.775463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.775481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.775487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.783840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.783857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.087 [2024-11-19 09:48:46.783863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.087 [2024-11-19 09:48:46.788056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.087 [2024-11-19 09:48:46.788074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.088 [2024-11-19 09:48:46.788081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.088 [2024-11-19 09:48:46.794725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.088 [2024-11-19 09:48:46.794742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.088 [2024-11-19 09:48:46.794749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.088 [2024-11-19 09:48:46.803195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.088 [2024-11-19 09:48:46.803213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.088 [2024-11-19 09:48:46.803220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.088 [2024-11-19 09:48:46.808121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.088 [2024-11-19 09:48:46.808139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.088 [2024-11-19 09:48:46.808145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.088 [2024-11-19 09:48:46.815960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.088 [2024-11-19 09:48:46.815978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.088 [2024-11-19 09:48:46.815984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.088 [2024-11-19 09:48:46.823121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.088 [2024-11-19 09:48:46.823139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.088 [2024-11-19 09:48:46.823145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.088 [2024-11-19 09:48:46.828024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.088 [2024-11-19 09:48:46.828041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.088 [2024-11-19 09:48:46.828047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.349 [2024-11-19 09:48:46.836141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.349 [2024-11-19 09:48:46.836164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.836171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.841494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.841513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.841519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.846559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.846577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.846583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.851972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.851989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.851995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.859889] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.859907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.859913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.865105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.865122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.865128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.873022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.873039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.873045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.877921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.877938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.877948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:31:00.350 [2024-11-19 09:48:46.885385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.885403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.885410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.890167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.890185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.890192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.895495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.895512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.895519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.899854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.899872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.899879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.904351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.904368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.904374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.908903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.908921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.908927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.916189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.916207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.916214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.926051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.926068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.926075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.937356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.937373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.937379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.949147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.949170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.949177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.961428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.961446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.961453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.972663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.972681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:00.350 [2024-11-19 09:48:46.972687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.983681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.983699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.983705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:46.995409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:46.995427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:46.995434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:47.007620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:47.007638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:47.007644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:47.013771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:47.013789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:47.013796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:47.021245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:47.021263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:47.021273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:47.030395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:47.030413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.350 [2024-11-19 09:48:47.030419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.350 [2024-11-19 09:48:47.033408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.350 [2024-11-19 09:48:47.033424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.351 [2024-11-19 09:48:47.033431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.351 [2024-11-19 09:48:47.036877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.351 [2024-11-19 09:48:47.036895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.351 [2024-11-19 09:48:47.036901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.351 [2024-11-19 09:48:47.042098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.351 [2024-11-19 09:48:47.042116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.351 [2024-11-19 09:48:47.042122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.351 [2024-11-19 09:48:47.049270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.351 [2024-11-19 09:48:47.049288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.351 [2024-11-19 09:48:47.049294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.351 [2024-11-19 09:48:47.055223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.351 [2024-11-19 09:48:47.055241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.351 [2024-11-19 09:48:47.055248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.351 [2024-11-19 09:48:47.065445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 
00:31:00.351 [2024-11-19 09:48:47.065462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.351 [2024-11-19 09:48:47.065468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.351 [2024-11-19 09:48:47.074018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.351 [2024-11-19 09:48:47.074036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.351 [2024-11-19 09:48:47.074042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.351 [2024-11-19 09:48:47.083849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.351 [2024-11-19 09:48:47.083874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.351 [2024-11-19 09:48:47.083880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.094733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.094751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.094757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.104205] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.104223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.104229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.115780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.115798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.115804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.123272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.123290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.123296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.613 3972.00 IOPS, 496.50 MiB/s [2024-11-19T08:48:47.361Z] [2024-11-19 09:48:47.134863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.134881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.134888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.143475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.143492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.143499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.152072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.152089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.152095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.164712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.164730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.164736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.176902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.176920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.176926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.189328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.189345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.189351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.201123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.201140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.201147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.209997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.210015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.210021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.221139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.221157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:00.613 [2024-11-19 09:48:47.221167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.229736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.229753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.229759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.235525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.235543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.235549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.239862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.239879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.239886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.244169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.244186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.244196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.251139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.251156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.251167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.257866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.257884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.257890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.262392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.262409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.613 [2024-11-19 09:48:47.262416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:00.613 [2024-11-19 09:48:47.273234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:00.613 [2024-11-19 09:48:47.273252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.613 [2024-11-19 09:48:47.273258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.613 [2024-11-19 09:48:47.280155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.613 [2024-11-19 09:48:47.280180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.613 [2024-11-19 09:48:47.280186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.613 [2024-11-19 09:48:47.286685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.613 [2024-11-19 09:48:47.286702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.613 [2024-11-19 09:48:47.286708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.613 [2024-11-19 09:48:47.290923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.613 [2024-11-19 09:48:47.290940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.613 [2024-11-19 09:48:47.290946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.613 [2024-11-19 09:48:47.295321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.613 [2024-11-19 09:48:47.295338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.295345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.303912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.303933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.303939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.308298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.308315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.308321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.314613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.314631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.314637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.319001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.319019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.319025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.323573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.323591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.323598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.328145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.328167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.328174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.334698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.334715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.334722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.339153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.339174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.339181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.346440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.346457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.346467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.614 [2024-11-19 09:48:47.355331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.614 [2024-11-19 09:48:47.355348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.614 [2024-11-19 09:48:47.355354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.876 [2024-11-19 09:48:47.359693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.876 [2024-11-19 09:48:47.359711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.876 [2024-11-19 09:48:47.359717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.876 [2024-11-19 09:48:47.370098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.876 [2024-11-19 09:48:47.370115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.876 [2024-11-19 09:48:47.370121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.876 [2024-11-19 09:48:47.378157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.876 [2024-11-19 09:48:47.378178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.378184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.389209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.389227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.389233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.394059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.394076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.394082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.398345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.398363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.398370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.405403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.405421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.405427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.416311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.416332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.416338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.423759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.423776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.423783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.431665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.431682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.431688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.440628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.440646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.440653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.445076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.445094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.445100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.449871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.449888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.449894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.453739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.453756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.453763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.458029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.458047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.458053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.462362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.462379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.462385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.472014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.472032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.472038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.478980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.478998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.479004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.483394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.483411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.483417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.488815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.488833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.488839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.493221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.493238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.493244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.497731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.497749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.497755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.502121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.502138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.502145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.508566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.508583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.508589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.517821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.517839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.517848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.522461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.522478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.522484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.531976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.531993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.531999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.538809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.538827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.538833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.877 [2024-11-19 09:48:47.549276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.877 [2024-11-19 09:48:47.549294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.877 [2024-11-19 09:48:47.549300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.557563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.557580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.557586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.562338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.562355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.562361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.569112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.569129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.569136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.579480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.579498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.579504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.583908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.583929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.583935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.588424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.588441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.588448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.596070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.596088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.596094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.600366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.600382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.600389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.605856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.605874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.605880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.610311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.610327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.610334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:00.878 [2024-11-19 09:48:47.614319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:00.878 [2024-11-19 09:48:47.614336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.878 [2024-11-19 09:48:47.614342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.624259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.624278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.624284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.633346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.633364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.633370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.644609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.644626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.644633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.653562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.653580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.653586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.657995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.658013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.658021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.663542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.663560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.663567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.668085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.668103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.668109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.673688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.673705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.673713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.678394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.678412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.678418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.684149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.684171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.684177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.690411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.690428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.690439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.696389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.696406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.696412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.705818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.705836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.705842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.711292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.711309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.711315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.722012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.722030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.722036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.727966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.727984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.727991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.732415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.732433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.732439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.744188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.744206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.744212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.748382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.140 [2024-11-19 09:48:47.748400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.140 [2024-11-19 09:48:47.748406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:01.140 [2024-11-19 09:48:47.752994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.141 [2024-11-19 09:48:47.753014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.141 [2024-11-19 09:48:47.753020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:01.141 [2024-11-19 09:48:47.760051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.141 [2024-11-19 09:48:47.760069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.141 [2024-11-19 09:48:47.760075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:01.141 [2024-11-19 09:48:47.768149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.141 [2024-11-19 09:48:47.768171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.141 [2024-11-19 09:48:47.768177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:01.141 [2024-11-19 09:48:47.774791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10)
00:31:01.141 [2024-11-19 09:48:47.774808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.141 [2024-11-19 09:48:47.774814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.779103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.779121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.779127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.784622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.784640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.784646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.793746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.793764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.793770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.799111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.799128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.799134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.803314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.803332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.803338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.807648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.807666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.807672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.811957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.811975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.811982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.816294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.816312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:01.141 [2024-11-19 09:48:47.816318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.827644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.827662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.827668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.836261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.836279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.836285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.840739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.840757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.840763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.845347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.845364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.845371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.850014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.850031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.850037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.857023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.857040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.857050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.862032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.862050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.862057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.868102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.868120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.868127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.141 [2024-11-19 09:48:47.874328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.141 [2024-11-19 09:48:47.874345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.141 [2024-11-19 09:48:47.874351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.404 [2024-11-19 09:48:47.884264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.404 [2024-11-19 09:48:47.884282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.404 [2024-11-19 09:48:47.884289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.404 [2024-11-19 09:48:47.893281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.404 [2024-11-19 09:48:47.893299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.404 [2024-11-19 09:48:47.893306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.404 [2024-11-19 09:48:47.901987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 
00:31:01.404 [2024-11-19 09:48:47.902005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.404 [2024-11-19 09:48:47.902011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.404 [2024-11-19 09:48:47.906345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.404 [2024-11-19 09:48:47.906363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.404 [2024-11-19 09:48:47.906369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.404 [2024-11-19 09:48:47.910906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.404 [2024-11-19 09:48:47.910923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.404 [2024-11-19 09:48:47.910930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.404 [2024-11-19 09:48:47.915300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.404 [2024-11-19 09:48:47.915317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.915324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.922728] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.922746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.922752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.928119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.928136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.928142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.932401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.932418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.932424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.941385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.941403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.941409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.948558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.948576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.948582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.957572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.957589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.957596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.962264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.962280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.962287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.970681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.970699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.970713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.975105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.975122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.975128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.982635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.982652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.982658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:47.994177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:47.994194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:47.994200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.001706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.001724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.001730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.010841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.010858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.010865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.015213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.015230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.015237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.021645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.021663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.021669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.026058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.026076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:01.405 [2024-11-19 09:48:48.026082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.030846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.030867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.030873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.035297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.035314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.035321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.045846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.045864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.045870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.053353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.053371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.053377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.064552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.064570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.064576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.069828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.069846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.069852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.076397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.076415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.076421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.079859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.405 [2024-11-19 09:48:48.079877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.405 [2024-11-19 09:48:48.079883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.405 [2024-11-19 09:48:48.084165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.406 [2024-11-19 09:48:48.084183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.406 [2024-11-19 09:48:48.084190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.406 [2024-11-19 09:48:48.088235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.406 [2024-11-19 09:48:48.088253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.406 [2024-11-19 09:48:48.088259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.406 [2024-11-19 09:48:48.094486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.406 [2024-11-19 09:48:48.094503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.406 [2024-11-19 09:48:48.094510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.406 [2024-11-19 09:48:48.102950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 
00:31:01.406 [2024-11-19 09:48:48.102968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.406 [2024-11-19 09:48:48.102974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.406 [2024-11-19 09:48:48.109503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.406 [2024-11-19 09:48:48.109521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.406 [2024-11-19 09:48:48.109527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.406 [2024-11-19 09:48:48.116103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.406 [2024-11-19 09:48:48.116122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.406 [2024-11-19 09:48:48.116129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.406 [2024-11-19 09:48:48.124114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb69a10) 00:31:01.406 [2024-11-19 09:48:48.124132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.406 [2024-11-19 09:48:48.124139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.406 4284.50 IOPS, 535.56 MiB/s 00:31:01.406 Latency(us) 00:31:01.406 
[2024-11-19T08:48:48.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.406 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:01.406 nvme0n1 : 2.00 4287.57 535.95 0.00 0.00 3728.90 761.17 13107.20 00:31:01.406 [2024-11-19T08:48:48.154Z] =================================================================================================================== 00:31:01.406 [2024-11-19T08:48:48.154Z] Total : 4287.57 535.95 0.00 0.00 3728.90 761.17 13107.20 00:31:01.406 { 00:31:01.406 "results": [ 00:31:01.406 { 00:31:01.406 "job": "nvme0n1", 00:31:01.406 "core_mask": "0x2", 00:31:01.406 "workload": "randread", 00:31:01.406 "status": "finished", 00:31:01.406 "queue_depth": 16, 00:31:01.406 "io_size": 131072, 00:31:01.406 "runtime": 2.0023, 00:31:01.406 "iops": 4287.5692953103935, 00:31:01.406 "mibps": 535.9461619137992, 00:31:01.406 "io_failed": 0, 00:31:01.406 "io_timeout": 0, 00:31:01.406 "avg_latency_us": 3728.897490972627, 00:31:01.406 "min_latency_us": 761.1733333333333, 00:31:01.406 "max_latency_us": 13107.2 00:31:01.406 } 00:31:01.406 ], 00:31:01.406 "core_count": 1 00:31:01.406 } 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:01.668 | .driver_specific 00:31:01.668 | .nvme_error 00:31:01.668 | .status_code 00:31:01.668 | .command_transient_transport_error' 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 277 > 0 )) 
00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 513677 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 513677 ']' 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 513677 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513677 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513677' 00:31:01.668 killing process with pid 513677 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 513677 00:31:01.668 Received shutdown signal, test time was about 2.000000 seconds 00:31:01.668 00:31:01.668 Latency(us) 00:31:01.668 [2024-11-19T08:48:48.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.668 [2024-11-19T08:48:48.416Z] =================================================================================================================== 00:31:01.668 [2024-11-19T08:48:48.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:01.668 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 513677 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=514524 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 514524 /var/tmp/bperf.sock 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 514524 ']' 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:01.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.929 09:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:01.929 [2024-11-19 09:48:48.545333] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:31:01.929 [2024-11-19 09:48:48.545391] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514524 ] 00:31:01.929 [2024-11-19 09:48:48.627190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.929 [2024-11-19 09:48:48.656810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:02.875 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.135 nvme0n1 00:31:03.135 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:03.135 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.135 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:03.135 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.135 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:03.135 09:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:03.397 Running I/O for 2 seconds... 
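Before `perform_tests` can be issued, `waitforlisten` has to block until the freshly spawned bdevperf accepts RPCs on `/var/tmp/bperf.sock` ("Waiting for process to start up and listen on UNIX domain socket..." above). A rough sketch of that wait loop, illustrative only: the real helper lives in `autotest_common.sh` and additionally checks that the pid is still alive, which this sketch omits.

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(path, timeout=5.0, interval=0.05):
    """Poll until some process accepts connections on a UNIX domain socket."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True  # somebody is listening
        except OSError:
            time.sleep(interval)  # not bound yet (or not listening), retry
        finally:
            s.close()
    return False

# Demo: a listener (standing in for bdevperf) comes up after a short delay.
path = os.path.join(tempfile.mkdtemp(), "bperf.sock")

def serve():
    time.sleep(0.2)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    srv.accept()

threading.Thread(target=serve, daemon=True).start()
print(wait_for_listen(path))  # → True
```

Only once this returns does the harness proceed to `accel_error_inject_error -o crc32c -t corrupt -i 256` and `bdevperf.py ... perform_tests`, producing the digest-error storm that follows.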
00:31:03.397 [2024-11-19 09:48:49.910168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f0788 00:31:03.397 [2024-11-19 09:48:49.911147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.911176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.919018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ed4e8 00:31:03.397 [2024-11-19 09:48:49.919972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.919991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.927563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e3498 00:31:03.397 [2024-11-19 09:48:49.928500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.928517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.936049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f8a50 00:31:03.397 [2024-11-19 09:48:49.936994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.937011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.944539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f57b0 00:31:03.397 [2024-11-19 09:48:49.945574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.945590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.953129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f2510 00:31:03.397 [2024-11-19 09:48:49.954024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.954041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.961581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ef270 00:31:03.397 [2024-11-19 09:48:49.962518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.962535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.970049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ebfd0 00:31:03.397 [2024-11-19 09:48:49.970988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.971005] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.978506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f9f68 00:31:03.397 [2024-11-19 09:48:49.979444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.979461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.986980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f6cc8 00:31:03.397 [2024-11-19 09:48:49.987924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.987940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:49.995421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f3a28 00:31:03.397 [2024-11-19 09:48:49.996338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:49.996358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:50.003842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f0788 00:31:03.397 [2024-11-19 09:48:50.004797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:50.004813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:50.012762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ed4e8 00:31:03.397 [2024-11-19 09:48:50.013712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:50.013730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:50.021387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e3498 00:31:03.397 [2024-11-19 09:48:50.022312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:50.022329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:50.029856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f8a50 00:31:03.397 [2024-11-19 09:48:50.030807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:50.030823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:50.038292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f57b0 00:31:03.397 [2024-11-19 09:48:50.039211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23763 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:03.397 [2024-11-19 09:48:50.039227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:50.046738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f2510 00:31:03.397 [2024-11-19 09:48:50.047655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:50.047670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:50.055185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ef270 00:31:03.397 [2024-11-19 09:48:50.056108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.397 [2024-11-19 09:48:50.056124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.397 [2024-11-19 09:48:50.063605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ebfd0 00:31:03.398 [2024-11-19 09:48:50.064539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.064555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.398 [2024-11-19 09:48:50.072047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f9f68 00:31:03.398 [2024-11-19 09:48:50.072977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:2309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.072993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.398 [2024-11-19 09:48:50.080491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f6cc8 00:31:03.398 [2024-11-19 09:48:50.081372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.081388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.398 [2024-11-19 09:48:50.088928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f3a28 00:31:03.398 [2024-11-19 09:48:50.089868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.089884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.398 [2024-11-19 09:48:50.097351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f0788 00:31:03.398 [2024-11-19 09:48:50.098240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.098256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.398 [2024-11-19 09:48:50.105780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ed4e8 00:31:03.398 [2024-11-19 09:48:50.106720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.106736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.398 [2024-11-19 09:48:50.114214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e3498 00:31:03.398 [2024-11-19 09:48:50.115147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.115166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.398 [2024-11-19 09:48:50.122636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f8a50 00:31:03.398 [2024-11-19 09:48:50.123571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.123587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.398 [2024-11-19 09:48:50.131059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f57b0 00:31:03.398 [2024-11-19 09:48:50.131999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.132014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.398 [2024-11-19 09:48:50.139482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f2510 00:31:03.398 
[2024-11-19 09:48:50.140418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.398 [2024-11-19 09:48:50.140435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.660 [2024-11-19 09:48:50.147890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ef270 00:31:03.660 [2024-11-19 09:48:50.148829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.660 [2024-11-19 09:48:50.148845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.660 [2024-11-19 09:48:50.156334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ebfd0 00:31:03.660 [2024-11-19 09:48:50.157224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.660 [2024-11-19 09:48:50.157239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.660 [2024-11-19 09:48:50.164768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f9f68 00:31:03.660 [2024-11-19 09:48:50.165656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.660 [2024-11-19 09:48:50.165671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.660 [2024-11-19 09:48:50.173194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a8a520) with pdu=0x2000166f6cc8 00:31:03.660 [2024-11-19 09:48:50.174126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.174141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.181623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f3a28 00:31:03.661 [2024-11-19 09:48:50.182517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.182532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.190039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f0788 00:31:03.661 [2024-11-19 09:48:50.190969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.190985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.198459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ed4e8 00:31:03.661 [2024-11-19 09:48:50.199390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.199406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.206893] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e3498 00:31:03.661 [2024-11-19 09:48:50.207815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.207830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.215335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f8a50 00:31:03.661 [2024-11-19 09:48:50.216241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.216260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.223764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f57b0 00:31:03.661 [2024-11-19 09:48:50.224691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.224707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.232199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f2510 00:31:03.661 [2024-11-19 09:48:50.233126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.233142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 
dnr:0 00:31:03.661 [2024-11-19 09:48:50.240607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ef270 00:31:03.661 [2024-11-19 09:48:50.241551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.241567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.249029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ebfd0 00:31:03.661 [2024-11-19 09:48:50.249970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.249986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.256885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e99d8 00:31:03.661 [2024-11-19 09:48:50.257791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.257807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.265771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e9e10 00:31:03.661 [2024-11-19 09:48:50.266814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.266829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.273658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166ef270 00:31:03.661 [2024-11-19 09:48:50.274340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.274356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.282018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166fac10 00:31:03.661 [2024-11-19 09:48:50.282735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.282750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.290462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f3a28 00:31:03.661 [2024-11-19 09:48:50.291182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.291198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.298931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e27f0 00:31:03.661 [2024-11-19 09:48:50.299610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.299626] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.307383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f2948 00:31:03.661 [2024-11-19 09:48:50.307948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.307964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.315821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f0788 00:31:03.661 [2024-11-19 09:48:50.316374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.316390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.324273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f6020 00:31:03.661 [2024-11-19 09:48:50.324852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.324868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.332705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f6020 00:31:03.661 [2024-11-19 09:48:50.333400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.333415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.341137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f6020 00:31:03.661 [2024-11-19 09:48:50.341847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.341863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.350675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166f6020 00:31:03.661 [2024-11-19 09:48:50.351816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.351831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.358185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166eee38 00:31:03.661 [2024-11-19 09:48:50.358668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.358684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.367292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.661 [2024-11-19 09:48:50.367566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5580 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:03.661 [2024-11-19 09:48:50.367582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.375989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.661 [2024-11-19 09:48:50.376260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.376276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.384684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.661 [2024-11-19 09:48:50.384992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.385008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.393455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.661 [2024-11-19 09:48:50.393746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.393762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.661 [2024-11-19 09:48:50.402266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.661 [2024-11-19 09:48:50.402511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 
nsid:1 lba:22891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.661 [2024-11-19 09:48:50.402526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.922 [2024-11-19 09:48:50.411018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.411257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.411272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.419775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.420071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.420087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.428468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.428739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.428760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.437164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.437450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.437470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.445861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.446179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.446196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.454630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.454900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.454916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.463379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.463660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.463676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.472103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 
00:31:03.923 [2024-11-19 09:48:50.472391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.472407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.480776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.481085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.481100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.489553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.489829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.489845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.498289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.498595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.498611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.507050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.507267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.507283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.515815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.516080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.516098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.524536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.524889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.524905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.533257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.533558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.533573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.541916] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.542246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.542261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.550650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.550934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.550950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.559408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.559716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.559732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.568133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.568430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.568446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:31:03.923 [2024-11-19 09:48:50.576882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.577182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.577198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.585605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.585832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.585846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.594369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.594644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.594660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.603183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.603419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.603434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.611873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.612145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.612165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.620548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.620833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.620849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.629319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.629608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.923 [2024-11-19 09:48:50.629624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.923 [2024-11-19 09:48:50.638053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.923 [2024-11-19 09:48:50.638350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.924 [2024-11-19 09:48:50.638366] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.924 [2024-11-19 09:48:50.646772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.924 [2024-11-19 09:48:50.646989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.924 [2024-11-19 09:48:50.647004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.924 [2024-11-19 09:48:50.655475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.924 [2024-11-19 09:48:50.655721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.924 [2024-11-19 09:48:50.655736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.924 [2024-11-19 09:48:50.664176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:03.924 [2024-11-19 09:48:50.664434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:03.924 [2024-11-19 09:48:50.664449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.672900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.673187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.673202] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.681653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.682005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.682020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.690330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.690542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.690557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.699104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.699356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.699370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.707798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.708083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10964 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:04.186 [2024-11-19 09:48:50.708098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.716569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.716836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.716851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.725244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.725526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.725541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.733985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.734274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.734290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.742813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.743084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 
nsid:1 lba:18032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.743102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.751566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.751831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.751854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.760311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.760576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.760598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.769073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.769345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.769360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.777788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.778071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.778087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.786520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.786799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.786815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.795226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.795531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.795546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.803939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.804249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.804265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.812643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 
00:31:04.186 [2024-11-19 09:48:50.812916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.812931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.821319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.821600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.821614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.830085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.830349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.830364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.838787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.839122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.839137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.847536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.186 [2024-11-19 09:48:50.847803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.186 [2024-11-19 09:48:50.847819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.186 [2024-11-19 09:48:50.856236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.187 [2024-11-19 09:48:50.856380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.187 [2024-11-19 09:48:50.856395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.187 [2024-11-19 09:48:50.864936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.187 [2024-11-19 09:48:50.865237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.187 [2024-11-19 09:48:50.865258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.187 [2024-11-19 09:48:50.873680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.187 [2024-11-19 09:48:50.873963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.187 [2024-11-19 09:48:50.873978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.187 [2024-11-19 09:48:50.882408] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8
00:31:04.187 [2024-11-19 09:48:50.882629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:04.187 [2024-11-19 09:48:50.882644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.187 [2024-11-19 09:48:50.891066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8
00:31:04.187 [2024-11-19 09:48:50.891368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:04.187 [2024-11-19 09:48:50.891384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.187 [2024-11-19 09:48:50.899787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8
00:31:04.187 [2024-11-19 09:48:50.900002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:04.187 [2024-11-19 09:48:50.900017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:04.187 29636.00 IOPS, 115.77 MiB/s [2024-11-19T08:48:50.935Z]
[... the same three-entry pattern (data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 -> WRITE sqid:1 command notice -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats at roughly 8-9 ms intervals with varying cid (mostly 27/45/58) and lba values, from 09:48:50.908 through 09:48:51.527 ...]
Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.528059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.528074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.536627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.536913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.536928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.545390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.545609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.545623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.554128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.554504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.554520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.562809] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.563126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.563141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.571610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.571903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.571918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.580272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.580561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.580576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.589046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.589295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.589310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:31:04.977 [2024-11-19 09:48:51.597695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.597979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.597995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.606425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.606714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.606730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.615134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.615525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.615540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.623841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.624136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.624151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.632566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.632840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.632857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.641291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.641543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.641559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.650057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.650344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.650363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.658813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.659106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.659121] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.667623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.667884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.667899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.676413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.676737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.676752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.685170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.685448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.685463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.693882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.694183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.977 [2024-11-19 09:48:51.694198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.977 [2024-11-19 09:48:51.702619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.977 [2024-11-19 09:48:51.702851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.978 [2024-11-19 09:48:51.702866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.978 [2024-11-19 09:48:51.711361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.978 [2024-11-19 09:48:51.711507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.978 [2024-11-19 09:48:51.711522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:04.978 [2024-11-19 09:48:51.720064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:04.978 [2024-11-19 09:48:51.720228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.978 [2024-11-19 09:48:51.720243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.240 [2024-11-19 09:48:51.728812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.240 [2024-11-19 09:48:51.729180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11312 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:05.240 [2024-11-19 09:48:51.729195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.240 [2024-11-19 09:48:51.737468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.240 [2024-11-19 09:48:51.737818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.240 [2024-11-19 09:48:51.737833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.240 [2024-11-19 09:48:51.746166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.240 [2024-11-19 09:48:51.746471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.240 [2024-11-19 09:48:51.746486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.240 [2024-11-19 09:48:51.754939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.240 [2024-11-19 09:48:51.755075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.240 [2024-11-19 09:48:51.755091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.240 [2024-11-19 09:48:51.763716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.240 [2024-11-19 09:48:51.764000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:18543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.240 [2024-11-19 09:48:51.764016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.240 [2024-11-19 09:48:51.772464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.240 [2024-11-19 09:48:51.772749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.240 [2024-11-19 09:48:51.772766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.240 [2024-11-19 09:48:51.781142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.240 [2024-11-19 09:48:51.781484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.240 [2024-11-19 09:48:51.781500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.240 [2024-11-19 09:48:51.789824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.240 [2024-11-19 09:48:51.790087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.240 [2024-11-19 09:48:51.790102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.798597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.798860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.798875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.807331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.807617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.807632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.815998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.816288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.816304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.824682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.824959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.824974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.833405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 
[2024-11-19 09:48:51.833733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.833749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.842123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.842374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.842388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.850904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.851186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.851206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.859611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.859904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.859919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.868334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.868574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.868597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.877041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.877334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.877353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.885737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.886022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.886037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.894496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.894726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.894741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 [2024-11-19 09:48:51.903205] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a520) with pdu=0x2000166e01f8 00:31:05.241 [2024-11-19 09:48:51.903529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.241 [2024-11-19 09:48:51.903545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:05.241 29464.50 IOPS, 115.10 MiB/s 00:31:05.241 Latency(us) 00:31:05.241 [2024-11-19T08:48:51.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.241 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.241 nvme0n1 : 2.00 29464.78 115.10 0.00 0.00 4337.20 2034.35 9502.72 00:31:05.241 [2024-11-19T08:48:51.989Z] =================================================================================================================== 00:31:05.241 [2024-11-19T08:48:51.989Z] Total : 29464.78 115.10 0.00 0.00 4337.20 2034.35 9502.72 00:31:05.241 { 00:31:05.241 "results": [ 00:31:05.241 { 00:31:05.241 "job": "nvme0n1", 00:31:05.241 "core_mask": "0x2", 00:31:05.241 "workload": "randwrite", 00:31:05.241 "status": "finished", 00:31:05.241 "queue_depth": 128, 00:31:05.241 "io_size": 4096, 00:31:05.241 "runtime": 2.004325, 00:31:05.241 "iops": 29464.782408042607, 00:31:05.241 "mibps": 115.09680628141643, 00:31:05.241 "io_failed": 0, 00:31:05.241 "io_timeout": 0, 00:31:05.241 "avg_latency_us": 4337.199448668236, 00:31:05.241 "min_latency_us": 2034.3466666666666, 00:31:05.241 "max_latency_us": 9502.72 00:31:05.241 } 00:31:05.241 ], 00:31:05.241 "core_count": 1 00:31:05.241 } 00:31:05.241 09:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:05.241 09:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:05.241 09:48:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:05.241 | .driver_specific 00:31:05.241 | .nvme_error 00:31:05.241 | .status_code 00:31:05.241 | .command_transient_transport_error' 00:31:05.241 09:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 231 > 0 )) 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 514524 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 514524 ']' 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 514524 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 514524 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 514524' 00:31:05.502 killing process with pid 514524 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 514524 00:31:05.502 Received shutdown signal, test time was about 2.000000 seconds 00:31:05.502 00:31:05.502 
Latency(us) 00:31:05.502 [2024-11-19T08:48:52.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.502 [2024-11-19T08:48:52.250Z] =================================================================================================================== 00:31:05.502 [2024-11-19T08:48:52.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.502 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 514524 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=515306 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 515306 /var/tmp/bperf.sock 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 515306 ']' 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.766 09:48:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:05.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.766 09:48:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:05.766 [2024-11-19 09:48:52.322403] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:31:05.766 [2024-11-19 09:48:52.322460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515306 ] 00:31:05.766 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:05.766 Zero copy mechanism will not be used. 
00:31:05.766 [2024-11-19 09:48:52.403267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.766 [2024-11-19 09:48:52.432370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.707 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.968 nvme0n1 00:31:06.968 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:06.968 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.968 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:06.968 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.968 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:06.968 09:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:07.230 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:07.230 Zero copy mechanism will not be used. 00:31:07.230 Running I/O for 2 seconds... 00:31:07.230 [2024-11-19 09:48:53.790357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.790617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.790641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.799714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.799767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.799785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.230 
[2024-11-19 09:48:53.807600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.807833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.807850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.815890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.815939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.815956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.825807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.825865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.825881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.830544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.830724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.830740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.838120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.838400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.838418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.842316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.842515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.842531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.846258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.846462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.846478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.849603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.849951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.849968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.853700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.853901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.853917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.858088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.858294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.858310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.862052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.862400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.862416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.230 [2024-11-19 09:48:53.865406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.230 [2024-11-19 09:48:53.865636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.230 [2024-11-19 09:48:53.865651] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.869490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.869675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.869690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.873266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.873462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.873477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.877467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.877655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.877671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.881965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.882183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:07.231 [2024-11-19 09:48:53.882199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.889751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.889847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.889862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.900351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.900587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.900602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.910039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.910328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.910345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.920412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.920742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.920762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.930616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.930886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.930902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.940356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.940708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.940725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.950707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.950952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.950968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.961478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.961753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.961770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.231 [2024-11-19 09:48:53.972081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.231 [2024-11-19 09:48:53.972329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.231 [2024-11-19 09:48:53.972346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:53.979182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:53.979392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:53.979408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:53.989303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:53.989624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:53.989642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:53.999255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 
00:31:07.493 [2024-11-19 09:48:53.999591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:53.999608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.009936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.010275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.010292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.020671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.020894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.020911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.031601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.031945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.031962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.042658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.042945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.042961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.053587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.053972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.053989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.065351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.065574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.065590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.075786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.076146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.076177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.086419] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.086627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.086643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.096890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.097126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.097142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.107798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.108126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.108143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.117632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.117865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.117882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:31:07.493 [2024-11-19 09:48:54.128365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.128684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.128701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.139507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.139789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.139806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.150336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.150596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.150612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.160807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.161121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.161138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.171352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.171601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.493 [2024-11-19 09:48:54.171618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.493 [2024-11-19 09:48:54.182092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.493 [2024-11-19 09:48:54.182420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.494 [2024-11-19 09:48:54.182437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.494 [2024-11-19 09:48:54.193358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.494 [2024-11-19 09:48:54.193621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.494 [2024-11-19 09:48:54.193640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.494 [2024-11-19 09:48:54.204360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.494 [2024-11-19 09:48:54.204599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.494 [2024-11-19 09:48:54.204615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.494 [2024-11-19 09:48:54.215168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.494 [2024-11-19 09:48:54.215396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.494 [2024-11-19 09:48:54.215412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.494 [2024-11-19 09:48:54.225739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.494 [2024-11-19 09:48:54.226106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.494 [2024-11-19 09:48:54.226123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.494 [2024-11-19 09:48:54.236632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.494 [2024-11-19 09:48:54.236851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.494 [2024-11-19 09:48:54.236868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.756 [2024-11-19 09:48:54.247393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.756 [2024-11-19 09:48:54.247738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.756 
[2024-11-19 09:48:54.247756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:07.756 [2024-11-19 09:48:54.257861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.756 [2024-11-19 09:48:54.258223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.756 [2024-11-19 09:48:54.258240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:07.756 [2024-11-19 09:48:54.265731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.756 [2024-11-19 09:48:54.265910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.756 [2024-11-19 09:48:54.265927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:07.756 [2024-11-19 09:48:54.275905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.756 [2024-11-19 09:48:54.276224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.756 [2024-11-19 09:48:54.276241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:07.756 [2024-11-19 09:48:54.282143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:07.756 [2024-11-19 09:48:54.282382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.282399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.288637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.288816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.288832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.295312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.295509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.295526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.301695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.301913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.301928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.308572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.308697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.308712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.317510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.317619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.317633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.327063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.327120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.327134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.333622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.333953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.333969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.338510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.338604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.338619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.347046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.347118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.347132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.354028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.354081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.354096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.361716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.361781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.361797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.370434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.370494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.370509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.377072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.377136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.377152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:07.756 [2024-11-19 09:48:54.384818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.756 [2024-11-19 09:48:54.384879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.756 [2024-11-19 09:48:54.384894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.394350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.394406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.394421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.401606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.401659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.401674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.409270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.409327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.409344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.416965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.417010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.417025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.424125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.424185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.424200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.432769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.432824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.432840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.441563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.441607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.441622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.450133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.450185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.450201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.459929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.459972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.459988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.468932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.468986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.469001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.479359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.479605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.479619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:07.757 [2024-11-19 09:48:54.490937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:07.757 [2024-11-19 09:48:54.491200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:07.757 [2024-11-19 09:48:54.491215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.501628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.501697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.501712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.513097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.513400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.513415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.523655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.523896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.523910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.534823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.535132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.535149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.546539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.546801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.546816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.557749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.558054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.558069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.569198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.569479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.569495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.580535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.580594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.580609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.590972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.591067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.591082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.601755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.602020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.602035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.613224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.613546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.613561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.621714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.621780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.621795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.630413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.630480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.630496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.640372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.640623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.640637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.649221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.649280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.649295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.657641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.657698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.020 [2024-11-19 09:48:54.657713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.020 [2024-11-19 09:48:54.666950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.020 [2024-11-19 09:48:54.667097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.667114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.674031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.674261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.674277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.682139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.682359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.682375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.689706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.689906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.689921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.695968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.696030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.696045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.699550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.699610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.699625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.707369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.707570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.707585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.714987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.715061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.715076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.723343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.723400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.723415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.732075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.732128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.732144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.737868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.737950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.737965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.746954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.747268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.747284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.021 [2024-11-19 09:48:54.757156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.021 [2024-11-19 09:48:54.757481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.021 [2024-11-19 09:48:54.757497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.283 [2024-11-19 09:48:54.768263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.283 [2024-11-19 09:48:54.768552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.283 [2024-11-19 09:48:54.768567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.283 [2024-11-19 09:48:54.779080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.283 [2024-11-19 09:48:54.779293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.283 [2024-11-19 09:48:54.779308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.283 3518.00 IOPS, 439.75 MiB/s [2024-11-19T08:48:55.031Z]
[2024-11-19 09:48:54.790287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.283 [2024-11-19 09:48:54.790477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.283 [2024-11-19 09:48:54.790492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.283 [2024-11-19 09:48:54.801423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.283 [2024-11-19 09:48:54.801568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.283 [2024-11-19 09:48:54.801584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.283 [2024-11-19 09:48:54.812423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.283 [2024-11-19 09:48:54.812662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.283 [2024-11-19 09:48:54.812678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.283 [2024-11-19 09:48:54.823635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.283 [2024-11-19 09:48:54.823882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.283 [2024-11-19 09:48:54.823898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.283 [2024-11-19 09:48:54.833845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.283 [2024-11-19 09:48:54.834131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.283 [2024-11-19 09:48:54.834147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.283 [2024-11-19 09:48:54.844733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.283 [2024-11-19 09:48:54.844796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.283 [2024-11-19 09:48:54.844811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.283 [2024-11-19 09:48:54.856185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.283 [2024-11-19 09:48:54.856366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.856381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.866532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.866862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.866878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.875574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.875807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.875822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.885679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.885972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.885989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.895591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.895798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.895813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.906044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.906323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.906349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.917204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.917445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.917460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.927808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.928073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.928088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.938917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.939166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.939181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.949426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.949717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.949732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.959898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.960197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.960213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.971192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.971509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.971524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.981612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.981839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.981854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:54.991478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:54.991564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.284 [2024-11-19 09:48:54.991579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.284 [2024-11-19 09:48:55.001487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.284 [2024-11-19 09:48:55.001779]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.284 [2024-11-19 09:48:55.001794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.284 [2024-11-19 09:48:55.007579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.284 [2024-11-19 09:48:55.007624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.284 [2024-11-19 09:48:55.007640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.284 [2024-11-19 09:48:55.014950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.284 [2024-11-19 09:48:55.015003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.284 [2024-11-19 09:48:55.015018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.284 [2024-11-19 09:48:55.018638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.284 [2024-11-19 09:48:55.018680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.284 [2024-11-19 09:48:55.018695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.284 [2024-11-19 09:48:55.026407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with 
pdu=0x2000166ff3c8 00:31:08.284 [2024-11-19 09:48:55.026461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.284 [2024-11-19 09:48:55.026476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.034786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.034907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.034922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.043357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.043637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.043653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.052483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.052786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.052802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.060308] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.060362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.060377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.064607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.064661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.064675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.068204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.068253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.068268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.071751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.071838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.071853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 
09:48:55.075606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.075655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.075670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.079710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.079754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.079769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.083482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.083530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.083545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.087400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.087456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.087471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.091650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.091699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.091714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.095411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.095465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.095483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.099666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.099715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.099729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.103039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.103097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.103112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.106508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.106569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.106584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.110465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.110521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.110536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.114363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.114411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.547 [2024-11-19 09:48:55.114427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.547 [2024-11-19 09:48:55.119323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.547 [2024-11-19 09:48:55.119378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.119393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.124492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.124535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.124550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.128888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.128959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.128974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.138601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.138720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.138735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.147800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.148092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:08.548 [2024-11-19 09:48:55.148108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.158795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.159018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.159034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.169189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.169510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.169526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.179517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.179807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.179823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.190664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.190959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.190975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.201536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.201805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.201822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.212531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.212804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.212818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.221872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.222162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.222177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.231673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.232013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.232029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.241789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.242051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.242066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.251941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.252208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.252223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.262518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.262777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.262792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.273109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 
00:31:08.548 [2024-11-19 09:48:55.273220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.273235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.279291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.279346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.279361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.282200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.282245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.282260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.548 [2024-11-19 09:48:55.286701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.548 [2024-11-19 09:48:55.286762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.548 [2024-11-19 09:48:55.286777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.811 [2024-11-19 09:48:55.294624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.811 [2024-11-19 09:48:55.294672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.811 [2024-11-19 09:48:55.294690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.811 [2024-11-19 09:48:55.297625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.811 [2024-11-19 09:48:55.297679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.811 [2024-11-19 09:48:55.297695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.811 [2024-11-19 09:48:55.300514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.811 [2024-11-19 09:48:55.300559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.811 [2024-11-19 09:48:55.300574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.811 [2024-11-19 09:48:55.303244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.811 [2024-11-19 09:48:55.303299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.811 [2024-11-19 09:48:55.303314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:08.811 [2024-11-19 09:48:55.306106] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.811 [2024-11-19 09:48:55.306180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.811 [2024-11-19 09:48:55.306195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.811 [2024-11-19 09:48:55.309098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.811 [2024-11-19 09:48:55.309144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.811 [2024-11-19 09:48:55.309164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.812 [2024-11-19 09:48:55.312141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.812 [2024-11-19 09:48:55.312192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.812 [2024-11-19 09:48:55.312207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.812 [2024-11-19 09:48:55.315153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.812 [2024-11-19 09:48:55.315203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.812 [2024-11-19 09:48:55.315218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:31:08.812 [2024-11-19 09:48:55.317840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.812 [2024-11-19 09:48:55.317896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.812 [2024-11-19 09:48:55.317911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:08.812 [2024-11-19 09:48:55.320655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.812 [2024-11-19 09:48:55.320708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.812 [2024-11-19 09:48:55.320724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:08.812 [2024-11-19 09:48:55.323292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.812 [2024-11-19 09:48:55.323340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.812 [2024-11-19 09:48:55.323355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:08.812 [2024-11-19 09:48:55.325849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:08.812 [2024-11-19 09:48:55.325892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.812 [2024-11-19 09:48:55.325907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.328482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.328533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.328548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.331092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.331153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.331173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.334369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.334449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.334464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.337197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.337253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.337268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.339704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.339748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.339764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.342270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.342331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.342346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.344817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.344864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.344879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.347373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.347418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.347432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.349904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.349960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.349975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.352429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.352478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.352492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.355229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.355280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.355295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.357814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.357871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.357887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.360321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.360372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.360387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.363433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.363537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.363551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.371171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.371432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.371449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.381474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.381758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.812 [2024-11-19 09:48:55.381773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.812 [2024-11-19 09:48:55.391800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.812 [2024-11-19 09:48:55.392080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.392097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.402240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.402497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.402512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.412511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.412824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.412839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.423097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.423395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.423412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.433762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.434039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.434056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.444150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.444383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.444398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.454517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.454808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.454824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.465099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.465357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.465373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.474538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.474860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.474876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.484916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.484998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.485013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.493773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.494091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.494107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.498708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.498764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.498779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.501423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.501478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.501493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.504056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.504126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.504141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.506724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.506775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.506790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.509376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.509432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.509447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.512056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.512137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.512152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.515031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.515077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.515093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.517920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.517995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.518010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.520490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.520544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.520558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.522985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.523028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.523043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.525801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.525860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.525874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.529646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.529771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.813 [2024-11-19 09:48:55.529786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:08.813 [2024-11-19 09:48:55.534751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.813 [2024-11-19 09:48:55.535025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.814 [2024-11-19 09:48:55.535041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:08.814 [2024-11-19 09:48:55.541938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.814 [2024-11-19 09:48:55.542226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.814 [2024-11-19 09:48:55.542244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:08.814 [2024-11-19 09:48:55.548885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.814 [2024-11-19 09:48:55.548927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.814 [2024-11-19 09:48:55.548943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:08.814 [2024-11-19 09:48:55.552745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:08.814 [2024-11-19 09:48:55.552796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.814 [2024-11-19 09:48:55.552811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.556962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.557006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.557021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.560810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.560869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.560884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.564605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.564650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.564665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.568458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.568501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.568516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.572959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.573016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.573030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.577830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.577875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.577890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.581639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.581684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.581699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.587868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.587926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.587941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.591467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.591513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.591528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.595316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.595360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.595375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.599862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.599921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.599936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.603125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.603195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.603210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.608684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.608727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.608742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.612918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.612964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.612979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.617899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.618085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.618100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.624531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.624592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.624607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.628392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.628454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.628469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.631987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.632039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.632054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.635243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.635310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.635325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.638710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.638786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.638801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.642320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.642392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.642407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.645820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.645866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.645880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.649648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.649729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.649744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.652992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.653044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.653062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.656854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.656951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.656967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:09.077 [2024-11-19 09:48:55.660651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.077 [2024-11-19 09:48:55.660696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.077 [2024-11-19 09:48:55.660711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:09.078 [2024-11-19 09:48:55.665834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.078 [2024-11-19 09:48:55.665880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.078 [2024-11-19 09:48:55.665895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:09.078 [2024-11-19 09:48:55.673151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.078 [2024-11-19 09:48:55.673222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.078 [2024-11-19 09:48:55.673237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:09.078 [2024-11-19 09:48:55.677397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.078 [2024-11-19 09:48:55.677493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.078 [2024-11-19 09:48:55.677508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:09.078 [2024-11-19 09:48:55.684847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.078 [2024-11-19 09:48:55.684915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.078 [2024-11-19 09:48:55.684930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:09.078 [2024-11-19 09:48:55.692708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.078 [2024-11-19 09:48:55.692958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.078 [2024-11-19 09:48:55.692974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:09.078 [2024-11-19 09:48:55.703181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.078 [2024-11-19 09:48:55.703470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.078 [2024-11-19 09:48:55.703486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:09.078 [2024-11-19 09:48:55.712996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.078 [2024-11-19 09:48:55.713217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.078 [2024-11-19 09:48:55.713232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:09.078 [2024-11-19 09:48:55.723484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.078 [2024-11-19 09:48:55.723807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.078 [2024-11-19 09:48:55.723823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:09.078 [2024-11-19 09:48:55.734147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8
00:31:09.078 [2024-11-19 09:48:55.734400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.078 [2024-11-19 09:48:55.734415] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:09.078 [2024-11-19 09:48:55.744478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:09.078 [2024-11-19 09:48:55.744557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.078 [2024-11-19 09:48:55.744572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:09.078 [2024-11-19 09:48:55.752731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:09.078 [2024-11-19 09:48:55.752983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.078 [2024-11-19 09:48:55.752998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:09.078 [2024-11-19 09:48:55.763031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:09.078 [2024-11-19 09:48:55.763144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.078 [2024-11-19 09:48:55.763164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:09.078 [2024-11-19 09:48:55.773293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:09.078 [2024-11-19 09:48:55.773614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.078 [2024-11-19 
09:48:55.773630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:09.078 [2024-11-19 09:48:55.783542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a8a860) with pdu=0x2000166ff3c8 00:31:09.078 [2024-11-19 09:48:55.783837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.078 [2024-11-19 09:48:55.783853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:09.078 4205.00 IOPS, 525.62 MiB/s 00:31:09.078 Latency(us) 00:31:09.078 [2024-11-19T08:48:55.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.078 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:09.078 nvme0n1 : 2.00 4203.60 525.45 0.00 0.00 3800.13 1181.01 13926.40 00:31:09.078 [2024-11-19T08:48:55.826Z] =================================================================================================================== 00:31:09.078 [2024-11-19T08:48:55.826Z] Total : 4203.60 525.45 0.00 0.00 3800.13 1181.01 13926.40 00:31:09.078 { 00:31:09.078 "results": [ 00:31:09.078 { 00:31:09.078 "job": "nvme0n1", 00:31:09.078 "core_mask": "0x2", 00:31:09.078 "workload": "randwrite", 00:31:09.078 "status": "finished", 00:31:09.078 "queue_depth": 16, 00:31:09.078 "io_size": 131072, 00:31:09.078 "runtime": 2.004472, 00:31:09.078 "iops": 4203.600748725849, 00:31:09.078 "mibps": 525.4500935907312, 00:31:09.078 "io_failed": 0, 00:31:09.078 "io_timeout": 0, 00:31:09.078 "avg_latency_us": 3800.1284911781, 00:31:09.078 "min_latency_us": 1181.0133333333333, 00:31:09.078 "max_latency_us": 13926.4 00:31:09.078 } 00:31:09.078 ], 00:31:09.078 "core_count": 1 00:31:09.078 } 00:31:09.078 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:09.078 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:09.078 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:09.078 | .driver_specific 00:31:09.078 | .nvme_error 00:31:09.078 | .status_code 00:31:09.078 | .command_transient_transport_error' 00:31:09.078 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:09.339 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 272 > 0 )) 00:31:09.339 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 515306 00:31:09.339 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 515306 ']' 00:31:09.339 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 515306 00:31:09.339 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:09.339 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.339 09:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 515306 00:31:09.339 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:09.339 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:09.339 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 515306' 00:31:09.339 killing process with pid 515306 00:31:09.339 09:48:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 515306 00:31:09.339 Received shutdown signal, test time was about 2.000000 seconds 00:31:09.339 00:31:09.339 Latency(us) 00:31:09.339 [2024-11-19T08:48:56.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.339 [2024-11-19T08:48:56.087Z] =================================================================================================================== 00:31:09.339 [2024-11-19T08:48:56.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.339 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 515306 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 512806 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 512806 ']' 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 512806 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 512806 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 512806' 00:31:09.599 killing process with pid 512806 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 512806 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 512806 00:31:09.599 00:31:09.599 real 0m16.650s 00:31:09.599 user 0m33.068s 00:31:09.599 sys 0m3.532s 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.599 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:09.599 ************************************ 00:31:09.599 END TEST nvmf_digest_error 00:31:09.599 ************************************ 00:31:09.860 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:09.860 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:31:09.860 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:09.860 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:31:09.860 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.860 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:31:09.860 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.860 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.860 rmmod nvme_tcp 00:31:09.860 rmmod nvme_fabrics 00:31:09.860 rmmod nvme_keyring 00:31:09.860 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 512806 ']' 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 512806 00:31:09.861 
09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 512806 ']' 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 512806 00:31:09.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (512806) - No such process 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 512806 is not found' 00:31:09.861 Process with pid 512806 is not found 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.861 09:48:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.775 09:48:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.036 00:31:12.036 real 0m43.437s 00:31:12.036 user 1m8.648s 00:31:12.036 sys 0m12.913s 00:31:12.036 09:48:58 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:12.036 ************************************ 00:31:12.036 END TEST nvmf_digest 00:31:12.036 ************************************ 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.036 ************************************ 00:31:12.036 START TEST nvmf_bdevperf 00:31:12.036 ************************************ 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:12.036 * Looking for test storage... 
00:31:12.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:31:12.036 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:12.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.298 --rc genhtml_branch_coverage=1 00:31:12.298 --rc genhtml_function_coverage=1 00:31:12.298 --rc genhtml_legend=1 00:31:12.298 --rc geninfo_all_blocks=1 00:31:12.298 --rc geninfo_unexecuted_blocks=1 00:31:12.298 00:31:12.298 ' 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:31:12.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.298 --rc genhtml_branch_coverage=1 00:31:12.298 --rc genhtml_function_coverage=1 00:31:12.298 --rc genhtml_legend=1 00:31:12.298 --rc geninfo_all_blocks=1 00:31:12.298 --rc geninfo_unexecuted_blocks=1 00:31:12.298 00:31:12.298 ' 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:12.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.298 --rc genhtml_branch_coverage=1 00:31:12.298 --rc genhtml_function_coverage=1 00:31:12.298 --rc genhtml_legend=1 00:31:12.298 --rc geninfo_all_blocks=1 00:31:12.298 --rc geninfo_unexecuted_blocks=1 00:31:12.298 00:31:12.298 ' 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:12.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.298 --rc genhtml_branch_coverage=1 00:31:12.298 --rc genhtml_function_coverage=1 00:31:12.298 --rc genhtml_legend=1 00:31:12.298 --rc geninfo_all_blocks=1 00:31:12.298 --rc geninfo_unexecuted_blocks=1 00:31:12.298 00:31:12.298 ' 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.298 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:12.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:12.299 09:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.443 09:49:05 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.443 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:20.444 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.444 
09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:20.444 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:20.444 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:20.444 09:49:05 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:20.444 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.444 09:49:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:31:20.444 00:31:20.444 --- 10.0.0.2 ping statistics --- 00:31:20.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.444 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:20.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:31:20.444 00:31:20.444 --- 10.0.0.1 ping statistics --- 00:31:20.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.444 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=520150 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 520150 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 520150 ']' 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.444 09:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:20.444 [2024-11-19 09:49:06.360294] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:31:20.444 [2024-11-19 09:49:06.360361] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.444 [2024-11-19 09:49:06.458984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:20.444 [2024-11-19 09:49:06.511425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.444 [2024-11-19 09:49:06.511480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.444 [2024-11-19 09:49:06.511489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.444 [2024-11-19 09:49:06.511497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.444 [2024-11-19 09:49:06.511503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:20.444 [2024-11-19 09:49:06.513299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.444 [2024-11-19 09:49:06.513461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.445 [2024-11-19 09:49:06.513462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.445 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.445 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:20.445 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.445 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.445 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:20.706 [2024-11-19 09:49:07.224953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:20.706 Malloc0 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:20.706 [2024-11-19 09:49:07.301770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:20.706 
09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:20.706 { 00:31:20.706 "params": { 00:31:20.706 "name": "Nvme$subsystem", 00:31:20.706 "trtype": "$TEST_TRANSPORT", 00:31:20.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.706 "adrfam": "ipv4", 00:31:20.706 "trsvcid": "$NVMF_PORT", 00:31:20.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.706 "hdgst": ${hdgst:-false}, 00:31:20.706 "ddgst": ${ddgst:-false} 00:31:20.706 }, 00:31:20.706 "method": "bdev_nvme_attach_controller" 00:31:20.706 } 00:31:20.706 EOF 00:31:20.706 )") 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:20.706 09:49:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:20.706 "params": { 00:31:20.706 "name": "Nvme1", 00:31:20.706 "trtype": "tcp", 00:31:20.706 "traddr": "10.0.0.2", 00:31:20.706 "adrfam": "ipv4", 00:31:20.706 "trsvcid": "4420", 00:31:20.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.706 "hdgst": false, 00:31:20.706 "ddgst": false 00:31:20.706 }, 00:31:20.706 "method": "bdev_nvme_attach_controller" 00:31:20.706 }' 00:31:20.706 [2024-11-19 09:49:07.360277] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:31:20.706 [2024-11-19 09:49:07.360345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520409 ] 00:31:20.968 [2024-11-19 09:49:07.452733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.968 [2024-11-19 09:49:07.505237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.229 Running I/O for 1 seconds... 00:31:22.171 8434.00 IOPS, 32.95 MiB/s 00:31:22.171 Latency(us) 00:31:22.171 [2024-11-19T08:49:08.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.171 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:22.171 Verification LBA range: start 0x0 length 0x4000 00:31:22.171 Nvme1n1 : 1.01 8527.12 33.31 0.00 0.00 14931.43 1276.59 14199.47 00:31:22.171 [2024-11-19T08:49:08.919Z] =================================================================================================================== 00:31:22.171 [2024-11-19T08:49:08.919Z] Total : 8527.12 33.31 0.00 0.00 14931.43 1276.59 14199.47 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=520740 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.432 { 00:31:22.432 "params": { 00:31:22.432 "name": "Nvme$subsystem", 00:31:22.432 "trtype": "$TEST_TRANSPORT", 00:31:22.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.432 "adrfam": "ipv4", 00:31:22.432 "trsvcid": "$NVMF_PORT", 00:31:22.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.432 "hdgst": ${hdgst:-false}, 00:31:22.432 "ddgst": ${ddgst:-false} 00:31:22.432 }, 00:31:22.432 "method": "bdev_nvme_attach_controller" 00:31:22.432 } 00:31:22.432 EOF 00:31:22.432 )") 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:22.432 09:49:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:22.432 "params": { 00:31:22.432 "name": "Nvme1", 00:31:22.432 "trtype": "tcp", 00:31:22.432 "traddr": "10.0.0.2", 00:31:22.432 "adrfam": "ipv4", 00:31:22.432 "trsvcid": "4420", 00:31:22.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.432 "hdgst": false, 00:31:22.432 "ddgst": false 00:31:22.432 }, 00:31:22.432 "method": "bdev_nvme_attach_controller" 00:31:22.432 }' 00:31:22.432 [2024-11-19 09:49:09.018184] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:31:22.432 [2024-11-19 09:49:09.018239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520740 ] 00:31:22.432 [2024-11-19 09:49:09.106723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.432 [2024-11-19 09:49:09.141538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.692 Running I/O for 15 seconds... 00:31:25.021 10805.00 IOPS, 42.21 MiB/s [2024-11-19T08:49:12.033Z] 11107.00 IOPS, 43.39 MiB/s [2024-11-19T08:49:12.033Z] 09:49:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 520150 00:31:25.285 09:49:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:31:25.285 [2024-11-19 09:49:11.982529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:75 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:25.285 [2024-11-19 09:49:11.982758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.285 [2024-11-19 09:49:11.982837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.285 [2024-11-19 09:49:11.982846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.982858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.982868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.982880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.982890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.982902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.982912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.982924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.982931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.982942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.982950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.982959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.982966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.982976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.982983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.982993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.983001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.983018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.286 [2024-11-19 09:49:11.983035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 
[2024-11-19 09:49:11.983078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 
[2024-11-19 09:49:11.983374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.286 [2024-11-19 09:49:11.983442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.286 [2024-11-19 09:49:11.983449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.287 [2024-11-19 09:49:11.983585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.287 [2024-11-19 09:49:11.983635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.287 [2024-11-19 09:49:11.983652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 
[2024-11-19 09:49:11.983662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.287 [2024-11-19 09:49:11.983669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.287 [2024-11-19 09:49:11.983685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.287 [2024-11-19 09:49:11.983702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.287 [2024-11-19 09:49:11.983719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 
[2024-11-19 09:49:11.983952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.983986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.983995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.984004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.984011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.984021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.984028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.984037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.287 [2024-11-19 09:49:11.984045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.287 [2024-11-19 09:49:11.984054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.288 [2024-11-19 09:49:11.984128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.288 [2024-11-19 09:49:11.984145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.288 [2024-11-19 09:49:11.984167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.288 [2024-11-19 09:49:11.984183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.288 [2024-11-19 09:49:11.984200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.288 [2024-11-19 09:49:11.984218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.288 [2024-11-19 09:49:11.984235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 
[2024-11-19 09:49:11.984244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.288 [2024-11-19 09:49:11.984251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.288 [2024-11-19 09:49:11.984267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 
[2024-11-19 09:49:11.984533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.288 [2024-11-19 09:49:11.984616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.288 [2024-11-19 09:49:11.984623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.289 [2024-11-19 09:49:11.984794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.984802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1390 is same with the state(6) to be set 00:31:25.289 [2024-11-19 09:49:11.984812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:25.289 [2024-11-19 09:49:11.984818] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:25.289 [2024-11-19 09:49:11.984824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96840 len:8 PRP1 0x0 PRP2 0x0 00:31:25.289 [2024-11-19 09:49:11.984832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.289 [2024-11-19 09:49:11.988416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.289 [2024-11-19 09:49:11.988470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.289 [2024-11-19 09:49:11.989406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.289 [2024-11-19 09:49:11.989444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.289 [2024-11-19 09:49:11.989457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.289 [2024-11-19 09:49:11.989703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.289 [2024-11-19 09:49:11.989927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.289 [2024-11-19 09:49:11.989936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.289 [2024-11-19 09:49:11.989946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.289 [2024-11-19 09:49:11.989955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.289 [2024-11-19 09:49:12.002506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.289 [2024-11-19 09:49:12.003063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.289 [2024-11-19 09:49:12.003084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.289 [2024-11-19 09:49:12.003092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.289 [2024-11-19 09:49:12.003318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.289 [2024-11-19 09:49:12.003539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.289 [2024-11-19 09:49:12.003547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.289 [2024-11-19 09:49:12.003555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.289 [2024-11-19 09:49:12.003562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.289 [2024-11-19 09:49:12.016532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.289 [2024-11-19 09:49:12.017087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.289 [2024-11-19 09:49:12.017104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.289 [2024-11-19 09:49:12.017112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.289 [2024-11-19 09:49:12.017338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.289 [2024-11-19 09:49:12.017559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.289 [2024-11-19 09:49:12.017567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.289 [2024-11-19 09:49:12.017575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.289 [2024-11-19 09:49:12.017582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.552 [2024-11-19 09:49:12.030451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.552 [2024-11-19 09:49:12.030997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.552 [2024-11-19 09:49:12.031016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.552 [2024-11-19 09:49:12.031028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.552 [2024-11-19 09:49:12.031256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.552 [2024-11-19 09:49:12.031476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.552 [2024-11-19 09:49:12.031485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.552 [2024-11-19 09:49:12.031493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.552 [2024-11-19 09:49:12.031500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.552 [2024-11-19 09:49:12.044274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.552 [2024-11-19 09:49:12.044807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.552 [2024-11-19 09:49:12.044825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.552 [2024-11-19 09:49:12.044835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.552 [2024-11-19 09:49:12.045055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.552 [2024-11-19 09:49:12.045283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.552 [2024-11-19 09:49:12.045293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.552 [2024-11-19 09:49:12.045301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.552 [2024-11-19 09:49:12.045308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.552 [2024-11-19 09:49:12.058077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.552 [2024-11-19 09:49:12.058705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.552 [2024-11-19 09:49:12.058748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.552 [2024-11-19 09:49:12.058759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.552 [2024-11-19 09:49:12.059001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.552 [2024-11-19 09:49:12.059236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.552 [2024-11-19 09:49:12.059246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.552 [2024-11-19 09:49:12.059255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.552 [2024-11-19 09:49:12.059264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.552 [2024-11-19 09:49:12.072049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.553 [2024-11-19 09:49:12.072688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.553 [2024-11-19 09:49:12.072733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.553 [2024-11-19 09:49:12.072745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.553 [2024-11-19 09:49:12.072987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.553 [2024-11-19 09:49:12.073226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.553 [2024-11-19 09:49:12.073237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.553 [2024-11-19 09:49:12.073245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.553 [2024-11-19 09:49:12.073254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.553 [2024-11-19 09:49:12.086048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.553 [2024-11-19 09:49:12.086633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.553 [2024-11-19 09:49:12.086656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.553 [2024-11-19 09:49:12.086664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.553 [2024-11-19 09:49:12.086884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.553 [2024-11-19 09:49:12.087104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.553 [2024-11-19 09:49:12.087114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.553 [2024-11-19 09:49:12.087121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.553 [2024-11-19 09:49:12.087129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.553 [2024-11-19 09:49:12.099918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.553 [2024-11-19 09:49:12.100587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.553 [2024-11-19 09:49:12.100634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.553 [2024-11-19 09:49:12.100646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.553 [2024-11-19 09:49:12.100890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.553 [2024-11-19 09:49:12.101116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.553 [2024-11-19 09:49:12.101125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.553 [2024-11-19 09:49:12.101133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.553 [2024-11-19 09:49:12.101141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.553 [2024-11-19 09:49:12.113742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.553 [2024-11-19 09:49:12.114431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.553 [2024-11-19 09:49:12.114483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.553 [2024-11-19 09:49:12.114495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.553 [2024-11-19 09:49:12.114742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.553 [2024-11-19 09:49:12.114968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.553 [2024-11-19 09:49:12.114977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.553 [2024-11-19 09:49:12.114986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.553 [2024-11-19 09:49:12.115000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.553 [2024-11-19 09:49:12.127645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.553 [2024-11-19 09:49:12.128256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.553 [2024-11-19 09:49:12.128282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.553 [2024-11-19 09:49:12.128291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.553 [2024-11-19 09:49:12.128514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.553 [2024-11-19 09:49:12.128734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.553 [2024-11-19 09:49:12.128745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.553 [2024-11-19 09:49:12.128753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.553 [2024-11-19 09:49:12.128761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.553 [2024-11-19 09:49:12.141573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.553 [2024-11-19 09:49:12.142146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.553 [2024-11-19 09:49:12.142179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.553 [2024-11-19 09:49:12.142187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.553 [2024-11-19 09:49:12.142409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.553 [2024-11-19 09:49:12.142630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.553 [2024-11-19 09:49:12.142641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.553 [2024-11-19 09:49:12.142649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.553 [2024-11-19 09:49:12.142657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.553 [2024-11-19 09:49:12.155475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.553 [2024-11-19 09:49:12.156048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.553 [2024-11-19 09:49:12.156070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.553 [2024-11-19 09:49:12.156078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.553 [2024-11-19 09:49:12.156307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.553 [2024-11-19 09:49:12.156529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.553 [2024-11-19 09:49:12.156537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.553 [2024-11-19 09:49:12.156545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.553 [2024-11-19 09:49:12.156553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.553 [2024-11-19 09:49:12.169379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.553 [2024-11-19 09:49:12.169954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.553 [2024-11-19 09:49:12.169978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.553 [2024-11-19 09:49:12.169987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.553 [2024-11-19 09:49:12.170220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.553 [2024-11-19 09:49:12.170448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.553 [2024-11-19 09:49:12.170461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.554 [2024-11-19 09:49:12.170470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.554 [2024-11-19 09:49:12.170478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.554 [2024-11-19 09:49:12.183290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.554 [2024-11-19 09:49:12.183835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.554 [2024-11-19 09:49:12.183859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.554 [2024-11-19 09:49:12.183869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.554 [2024-11-19 09:49:12.184090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.554 [2024-11-19 09:49:12.184321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.554 [2024-11-19 09:49:12.184341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.554 [2024-11-19 09:49:12.184350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.554 [2024-11-19 09:49:12.184362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.554 [2024-11-19 09:49:12.197178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.554 [2024-11-19 09:49:12.197736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.554 [2024-11-19 09:49:12.197762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.554 [2024-11-19 09:49:12.197771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.554 [2024-11-19 09:49:12.197994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.554 [2024-11-19 09:49:12.198227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.554 [2024-11-19 09:49:12.198237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.554 [2024-11-19 09:49:12.198247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.554 [2024-11-19 09:49:12.198254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.554 [2024-11-19 09:49:12.211057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.554 [2024-11-19 09:49:12.211743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.554 [2024-11-19 09:49:12.211804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.554 [2024-11-19 09:49:12.211825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.554 [2024-11-19 09:49:12.212080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.554 [2024-11-19 09:49:12.212320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.554 [2024-11-19 09:49:12.212330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.554 [2024-11-19 09:49:12.212339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.554 [2024-11-19 09:49:12.212348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.554 [2024-11-19 09:49:12.224968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.554 [2024-11-19 09:49:12.225698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.554 [2024-11-19 09:49:12.225760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.554 [2024-11-19 09:49:12.225773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.554 [2024-11-19 09:49:12.226029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.554 [2024-11-19 09:49:12.226266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.554 [2024-11-19 09:49:12.226277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.554 [2024-11-19 09:49:12.226286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.554 [2024-11-19 09:49:12.226294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.554 [2024-11-19 09:49:12.238870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.554 [2024-11-19 09:49:12.239522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.554 [2024-11-19 09:49:12.239585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.554 [2024-11-19 09:49:12.239597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.554 [2024-11-19 09:49:12.239852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.554 [2024-11-19 09:49:12.240080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.554 [2024-11-19 09:49:12.240091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.554 [2024-11-19 09:49:12.240100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.554 [2024-11-19 09:49:12.240109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.554 [2024-11-19 09:49:12.252702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.554 [2024-11-19 09:49:12.253345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.554 [2024-11-19 09:49:12.253375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.554 [2024-11-19 09:49:12.253384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.554 [2024-11-19 09:49:12.253607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.554 [2024-11-19 09:49:12.253837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.554 [2024-11-19 09:49:12.253850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.554 [2024-11-19 09:49:12.253858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.554 [2024-11-19 09:49:12.253866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.554 [2024-11-19 09:49:12.266649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:25.554 [2024-11-19 09:49:12.267279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.554 [2024-11-19 09:49:12.267341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:25.554 [2024-11-19 09:49:12.267355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:25.554 [2024-11-19 09:49:12.267611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:25.554 [2024-11-19 09:49:12.267839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:25.554 [2024-11-19 09:49:12.267850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:25.554 [2024-11-19 09:49:12.267859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:25.554 [2024-11-19 09:49:12.267869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:25.554 [2024-11-19 09:49:12.280458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.554 [2024-11-19 09:49:12.281052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.554 [2024-11-19 09:49:12.281081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.554 [2024-11-19 09:49:12.281090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.554 [2024-11-19 09:49:12.281325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.554 [2024-11-19 09:49:12.281547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.554 [2024-11-19 09:49:12.281557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.554 [2024-11-19 09:49:12.281565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.554 [2024-11-19 09:49:12.281573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.555 [2024-11-19 09:49:12.294336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.555 [2024-11-19 09:49:12.294915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.555 [2024-11-19 09:49:12.294939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.555 [2024-11-19 09:49:12.294948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.555 [2024-11-19 09:49:12.295179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.555 [2024-11-19 09:49:12.295401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.555 [2024-11-19 09:49:12.295412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.555 [2024-11-19 09:49:12.295421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.555 [2024-11-19 09:49:12.295436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.818 [2024-11-19 09:49:12.308207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.818 [2024-11-19 09:49:12.308821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.818 [2024-11-19 09:49:12.308844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.818 [2024-11-19 09:49:12.308853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.818 [2024-11-19 09:49:12.309074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.818 [2024-11-19 09:49:12.309303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.818 [2024-11-19 09:49:12.309314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.818 [2024-11-19 09:49:12.309322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.818 [2024-11-19 09:49:12.309331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.818 [2024-11-19 09:49:12.322093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.818 [2024-11-19 09:49:12.322735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.818 [2024-11-19 09:49:12.322797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.818 [2024-11-19 09:49:12.322810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.818 [2024-11-19 09:49:12.323065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.818 [2024-11-19 09:49:12.323318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.818 [2024-11-19 09:49:12.323329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.818 [2024-11-19 09:49:12.323339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.818 [2024-11-19 09:49:12.323348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.818 [2024-11-19 09:49:12.335934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.818 [2024-11-19 09:49:12.336658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.818 [2024-11-19 09:49:12.336721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.818 [2024-11-19 09:49:12.336734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.818 [2024-11-19 09:49:12.336989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.818 [2024-11-19 09:49:12.337230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.818 [2024-11-19 09:49:12.337241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.818 [2024-11-19 09:49:12.337250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.818 [2024-11-19 09:49:12.337259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.818 [2024-11-19 09:49:12.349827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.818 [2024-11-19 09:49:12.350568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.818 [2024-11-19 09:49:12.350630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.818 [2024-11-19 09:49:12.350643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.818 [2024-11-19 09:49:12.350898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.818 [2024-11-19 09:49:12.351125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.818 [2024-11-19 09:49:12.351135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.818 [2024-11-19 09:49:12.351144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.818 [2024-11-19 09:49:12.351153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.818 [2024-11-19 09:49:12.363733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.818 [2024-11-19 09:49:12.364474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.819 [2024-11-19 09:49:12.364537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.819 [2024-11-19 09:49:12.364550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.819 [2024-11-19 09:49:12.364804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.819 [2024-11-19 09:49:12.365032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.819 [2024-11-19 09:49:12.365041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.819 [2024-11-19 09:49:12.365051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.819 [2024-11-19 09:49:12.365059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.819 [2024-11-19 09:49:12.377639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.819 [2024-11-19 09:49:12.378275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.819 [2024-11-19 09:49:12.378338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.819 [2024-11-19 09:49:12.378352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.819 [2024-11-19 09:49:12.378608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.819 [2024-11-19 09:49:12.378835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.819 [2024-11-19 09:49:12.378846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.819 [2024-11-19 09:49:12.378856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.819 [2024-11-19 09:49:12.378864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.819 [2024-11-19 09:49:12.391452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.819 [2024-11-19 09:49:12.392140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.819 [2024-11-19 09:49:12.392213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.819 [2024-11-19 09:49:12.392233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.819 [2024-11-19 09:49:12.392488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.819 [2024-11-19 09:49:12.392716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.819 [2024-11-19 09:49:12.392725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.819 [2024-11-19 09:49:12.392735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.819 [2024-11-19 09:49:12.392744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.819 [2024-11-19 09:49:12.405325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.819 [2024-11-19 09:49:12.406045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.819 [2024-11-19 09:49:12.406108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.819 [2024-11-19 09:49:12.406121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.819 [2024-11-19 09:49:12.406388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.819 [2024-11-19 09:49:12.406619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.819 [2024-11-19 09:49:12.406628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.819 [2024-11-19 09:49:12.406637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.819 [2024-11-19 09:49:12.406646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.819 [2024-11-19 09:49:12.419234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.819 [2024-11-19 09:49:12.419901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.819 [2024-11-19 09:49:12.419964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.819 [2024-11-19 09:49:12.419977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.819 [2024-11-19 09:49:12.420243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.819 [2024-11-19 09:49:12.420471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.819 [2024-11-19 09:49:12.420482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.819 [2024-11-19 09:49:12.420491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.819 [2024-11-19 09:49:12.420499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.819 9467.33 IOPS, 36.98 MiB/s [2024-11-19T08:49:12.567Z] [2024-11-19 09:49:12.433106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.819 [2024-11-19 09:49:12.433836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.819 [2024-11-19 09:49:12.433898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.819 [2024-11-19 09:49:12.433911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.819 [2024-11-19 09:49:12.434179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.819 [2024-11-19 09:49:12.434415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.819 [2024-11-19 09:49:12.434424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.819 [2024-11-19 09:49:12.434433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.819 [2024-11-19 09:49:12.434442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.819 [2024-11-19 09:49:12.447019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.819 [2024-11-19 09:49:12.447618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.819 [2024-11-19 09:49:12.447648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.819 [2024-11-19 09:49:12.447656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.819 [2024-11-19 09:49:12.447880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.819 [2024-11-19 09:49:12.448101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.819 [2024-11-19 09:49:12.448110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.819 [2024-11-19 09:49:12.448118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.819 [2024-11-19 09:49:12.448127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.819 [2024-11-19 09:49:12.460899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.819 [2024-11-19 09:49:12.461548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.819 [2024-11-19 09:49:12.461609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.819 [2024-11-19 09:49:12.461622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.819 [2024-11-19 09:49:12.461877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.819 [2024-11-19 09:49:12.462105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.819 [2024-11-19 09:49:12.462114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.819 [2024-11-19 09:49:12.462124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.819 [2024-11-19 09:49:12.462133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.819 [2024-11-19 09:49:12.474717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.819 [2024-11-19 09:49:12.475328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.819 [2024-11-19 09:49:12.475359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.819 [2024-11-19 09:49:12.475368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.819 [2024-11-19 09:49:12.475592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.819 [2024-11-19 09:49:12.475813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.820 [2024-11-19 09:49:12.475824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.820 [2024-11-19 09:49:12.475839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.820 [2024-11-19 09:49:12.475847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.820 [2024-11-19 09:49:12.488617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.820 [2024-11-19 09:49:12.489272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.820 [2024-11-19 09:49:12.489335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.820 [2024-11-19 09:49:12.489348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.820 [2024-11-19 09:49:12.489603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.820 [2024-11-19 09:49:12.489830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.820 [2024-11-19 09:49:12.489841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.820 [2024-11-19 09:49:12.489850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.820 [2024-11-19 09:49:12.489859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.820 [2024-11-19 09:49:12.502459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.820 [2024-11-19 09:49:12.503191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.820 [2024-11-19 09:49:12.503254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.820 [2024-11-19 09:49:12.503266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.820 [2024-11-19 09:49:12.503520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.820 [2024-11-19 09:49:12.503748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.820 [2024-11-19 09:49:12.503758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.820 [2024-11-19 09:49:12.503767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.820 [2024-11-19 09:49:12.503776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.820 [2024-11-19 09:49:12.516359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.820 [2024-11-19 09:49:12.517083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.820 [2024-11-19 09:49:12.517145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.820 [2024-11-19 09:49:12.517171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.820 [2024-11-19 09:49:12.517427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.820 [2024-11-19 09:49:12.517655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.820 [2024-11-19 09:49:12.517664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.820 [2024-11-19 09:49:12.517674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.820 [2024-11-19 09:49:12.517683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.820 [2024-11-19 09:49:12.530291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.820 [2024-11-19 09:49:12.531028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.820 [2024-11-19 09:49:12.531090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.820 [2024-11-19 09:49:12.531103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.820 [2024-11-19 09:49:12.531371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.820 [2024-11-19 09:49:12.531599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.820 [2024-11-19 09:49:12.531608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.820 [2024-11-19 09:49:12.531617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.820 [2024-11-19 09:49:12.531626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.820 [2024-11-19 09:49:12.544197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.820 [2024-11-19 09:49:12.544929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.820 [2024-11-19 09:49:12.544989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.820 [2024-11-19 09:49:12.545002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.820 [2024-11-19 09:49:12.545271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.820 [2024-11-19 09:49:12.545499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.820 [2024-11-19 09:49:12.545510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.820 [2024-11-19 09:49:12.545519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.820 [2024-11-19 09:49:12.545528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:25.820 [2024-11-19 09:49:12.558096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:25.820 [2024-11-19 09:49:12.558800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:25.820 [2024-11-19 09:49:12.558862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:25.820 [2024-11-19 09:49:12.558875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:25.820 [2024-11-19 09:49:12.559129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:25.820 [2024-11-19 09:49:12.559369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:25.820 [2024-11-19 09:49:12.559380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:25.820 [2024-11-19 09:49:12.559389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:25.820 [2024-11-19 09:49:12.559398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:26.082 [2024-11-19 09:49:12.571979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:26.082 [2024-11-19 09:49:12.572709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:26.082 [2024-11-19 09:49:12.572771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:26.082 [2024-11-19 09:49:12.572792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:26.082 [2024-11-19 09:49:12.573046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:26.082 [2024-11-19 09:49:12.573287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:26.082 [2024-11-19 09:49:12.573298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:26.082 [2024-11-19 09:49:12.573307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:26.082 [2024-11-19 09:49:12.573316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:26.082 [2024-11-19 09:49:12.585883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:26.082 [2024-11-19 09:49:12.586596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:26.082 [2024-11-19 09:49:12.586658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:26.082 [2024-11-19 09:49:12.586671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:26.082 [2024-11-19 09:49:12.586926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:26.082 [2024-11-19 09:49:12.587153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:26.082 [2024-11-19 09:49:12.587177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:26.082 [2024-11-19 09:49:12.587186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:26.082 [2024-11-19 09:49:12.587196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:26.082 [2024-11-19 09:49:12.599763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:26.082 [2024-11-19 09:49:12.600393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:26.083 [2024-11-19 09:49:12.600422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:26.083 [2024-11-19 09:49:12.600432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:26.083 [2024-11-19 09:49:12.600655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:26.083 [2024-11-19 09:49:12.600876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:26.083 [2024-11-19 09:49:12.600885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:26.083 [2024-11-19 09:49:12.600893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:26.083 [2024-11-19 09:49:12.600902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:26.083 [2024-11-19 09:49:12.613669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:26.083 [2024-11-19 09:49:12.614242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:26.083 [2024-11-19 09:49:12.614268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:26.083 [2024-11-19 09:49:12.614276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:26.083 [2024-11-19 09:49:12.614498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:26.083 [2024-11-19 09:49:12.614727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:26.083 [2024-11-19 09:49:12.614736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:26.083 [2024-11-19 09:49:12.614744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:26.083 [2024-11-19 09:49:12.614751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:26.083 [2024-11-19 09:49:12.627555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:26.083 [2024-11-19 09:49:12.628208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:26.083 [2024-11-19 09:49:12.628271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:26.083 [2024-11-19 09:49:12.628284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:26.083 [2024-11-19 09:49:12.628539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:26.083 [2024-11-19 09:49:12.628766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:26.083 [2024-11-19 09:49:12.628775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:26.083 [2024-11-19 09:49:12.628784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:26.083 [2024-11-19 09:49:12.628793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:26.083 [2024-11-19 09:49:12.641382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:26.083 [2024-11-19 09:49:12.642010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:26.083 [2024-11-19 09:49:12.642069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:26.083 [2024-11-19 09:49:12.642082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:26.083 [2024-11-19 09:49:12.642356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:26.083 [2024-11-19 09:49:12.642585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:26.083 [2024-11-19 09:49:12.642595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:26.083 [2024-11-19 09:49:12.642603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:26.083 [2024-11-19 09:49:12.642612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:26.083 [2024-11-19 09:49:12.655391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:26.083 [2024-11-19 09:49:12.656084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:26.083 [2024-11-19 09:49:12.656145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:26.083 [2024-11-19 09:49:12.656172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:26.083 [2024-11-19 09:49:12.656428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:26.083 [2024-11-19 09:49:12.656655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:26.083 [2024-11-19 09:49:12.656665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:26.083 [2024-11-19 09:49:12.656682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:26.083 [2024-11-19 09:49:12.656691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:26.083 [2024-11-19 09:49:12.669385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.083 [2024-11-19 09:49:12.670140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.083 [2024-11-19 09:49:12.670212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.083 [2024-11-19 09:49:12.670225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.083 [2024-11-19 09:49:12.670480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.083 [2024-11-19 09:49:12.670707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.083 [2024-11-19 09:49:12.670716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.083 [2024-11-19 09:49:12.670725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.083 [2024-11-19 09:49:12.670734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.083 [2024-11-19 09:49:12.683317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.083 [2024-11-19 09:49:12.684038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.083 [2024-11-19 09:49:12.684100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.083 [2024-11-19 09:49:12.684113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.083 [2024-11-19 09:49:12.684382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.083 [2024-11-19 09:49:12.684611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.083 [2024-11-19 09:49:12.684620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.083 [2024-11-19 09:49:12.684629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.083 [2024-11-19 09:49:12.684638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.083 [2024-11-19 09:49:12.697211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.083 [2024-11-19 09:49:12.697892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.083 [2024-11-19 09:49:12.697954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.083 [2024-11-19 09:49:12.697967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.083 [2024-11-19 09:49:12.698236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.083 [2024-11-19 09:49:12.698465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.083 [2024-11-19 09:49:12.698475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.083 [2024-11-19 09:49:12.698484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.083 [2024-11-19 09:49:12.698493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.083 [2024-11-19 09:49:12.711061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.083 [2024-11-19 09:49:12.711753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.083 [2024-11-19 09:49:12.711814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.083 [2024-11-19 09:49:12.711827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.083 [2024-11-19 09:49:12.712081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.083 [2024-11-19 09:49:12.712323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.083 [2024-11-19 09:49:12.712334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.083 [2024-11-19 09:49:12.712343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.083 [2024-11-19 09:49:12.712352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.083 [2024-11-19 09:49:12.724919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.084 [2024-11-19 09:49:12.725635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.084 [2024-11-19 09:49:12.725697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.084 [2024-11-19 09:49:12.725710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.084 [2024-11-19 09:49:12.725965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.084 [2024-11-19 09:49:12.726235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.084 [2024-11-19 09:49:12.726247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.084 [2024-11-19 09:49:12.726256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.084 [2024-11-19 09:49:12.726265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.084 [2024-11-19 09:49:12.738905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.084 [2024-11-19 09:49:12.739540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.084 [2024-11-19 09:49:12.739569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.084 [2024-11-19 09:49:12.739578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.084 [2024-11-19 09:49:12.739800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.084 [2024-11-19 09:49:12.740023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.084 [2024-11-19 09:49:12.740033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.084 [2024-11-19 09:49:12.740042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.084 [2024-11-19 09:49:12.740049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.084 [2024-11-19 09:49:12.752844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.084 [2024-11-19 09:49:12.753419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.084 [2024-11-19 09:49:12.753444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.084 [2024-11-19 09:49:12.753460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.084 [2024-11-19 09:49:12.753683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.084 [2024-11-19 09:49:12.753904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.084 [2024-11-19 09:49:12.753914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.084 [2024-11-19 09:49:12.753923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.084 [2024-11-19 09:49:12.753931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.084 [2024-11-19 09:49:12.766705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.084 [2024-11-19 09:49:12.767400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.084 [2024-11-19 09:49:12.767463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.084 [2024-11-19 09:49:12.767476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.084 [2024-11-19 09:49:12.767731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.084 [2024-11-19 09:49:12.767958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.084 [2024-11-19 09:49:12.767968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.084 [2024-11-19 09:49:12.767977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.084 [2024-11-19 09:49:12.767986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.084 [2024-11-19 09:49:12.780576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.084 [2024-11-19 09:49:12.781255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.084 [2024-11-19 09:49:12.781318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.084 [2024-11-19 09:49:12.781331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.084 [2024-11-19 09:49:12.781586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.084 [2024-11-19 09:49:12.781814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.084 [2024-11-19 09:49:12.781824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.084 [2024-11-19 09:49:12.781833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.084 [2024-11-19 09:49:12.781843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.084 [2024-11-19 09:49:12.794423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.084 [2024-11-19 09:49:12.795099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.084 [2024-11-19 09:49:12.795173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.084 [2024-11-19 09:49:12.795187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.084 [2024-11-19 09:49:12.795442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.084 [2024-11-19 09:49:12.795676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.084 [2024-11-19 09:49:12.795686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.084 [2024-11-19 09:49:12.795695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.084 [2024-11-19 09:49:12.795704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.084 [2024-11-19 09:49:12.808282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.084 [2024-11-19 09:49:12.809015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.084 [2024-11-19 09:49:12.809076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.084 [2024-11-19 09:49:12.809089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.084 [2024-11-19 09:49:12.809359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.084 [2024-11-19 09:49:12.809587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.084 [2024-11-19 09:49:12.809596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.084 [2024-11-19 09:49:12.809605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.084 [2024-11-19 09:49:12.809614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.084 [2024-11-19 09:49:12.822187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.084 [2024-11-19 09:49:12.822895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.084 [2024-11-19 09:49:12.822957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.084 [2024-11-19 09:49:12.822969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.084 [2024-11-19 09:49:12.823238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.084 [2024-11-19 09:49:12.823467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.084 [2024-11-19 09:49:12.823477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.085 [2024-11-19 09:49:12.823486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.085 [2024-11-19 09:49:12.823495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.347 [2024-11-19 09:49:12.836115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.347 [2024-11-19 09:49:12.836844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.347 [2024-11-19 09:49:12.836907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.347 [2024-11-19 09:49:12.836919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.347 [2024-11-19 09:49:12.837188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.347 [2024-11-19 09:49:12.837417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.347 [2024-11-19 09:49:12.837426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.347 [2024-11-19 09:49:12.837442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.347 [2024-11-19 09:49:12.837451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.347 [2024-11-19 09:49:12.850029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.347 [2024-11-19 09:49:12.850724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.347 [2024-11-19 09:49:12.850785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.347 [2024-11-19 09:49:12.850797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.347 [2024-11-19 09:49:12.851052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.347 [2024-11-19 09:49:12.851294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.347 [2024-11-19 09:49:12.851305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.347 [2024-11-19 09:49:12.851315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.348 [2024-11-19 09:49:12.851324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.348 [2024-11-19 09:49:12.863709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.348 [2024-11-19 09:49:12.864289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.348 [2024-11-19 09:49:12.864345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.348 [2024-11-19 09:49:12.864355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.348 [2024-11-19 09:49:12.864539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.348 [2024-11-19 09:49:12.864697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.348 [2024-11-19 09:49:12.864705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.348 [2024-11-19 09:49:12.864711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.348 [2024-11-19 09:49:12.864718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.348 [2024-11-19 09:49:12.876371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.348 [2024-11-19 09:49:12.876972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.348 [2024-11-19 09:49:12.877024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.348 [2024-11-19 09:49:12.877033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.348 [2024-11-19 09:49:12.877226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.348 [2024-11-19 09:49:12.877384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.348 [2024-11-19 09:49:12.877391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.348 [2024-11-19 09:49:12.877398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.348 [2024-11-19 09:49:12.877405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.348 [2024-11-19 09:49:12.889028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.348 [2024-11-19 09:49:12.889563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.348 [2024-11-19 09:49:12.889584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.348 [2024-11-19 09:49:12.889590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.348 [2024-11-19 09:49:12.889744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.348 [2024-11-19 09:49:12.889896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.348 [2024-11-19 09:49:12.889902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.348 [2024-11-19 09:49:12.889908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.348 [2024-11-19 09:49:12.889914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.348 [2024-11-19 09:49:12.901680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.348 [2024-11-19 09:49:12.902200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.348 [2024-11-19 09:49:12.902218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.348 [2024-11-19 09:49:12.902223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.348 [2024-11-19 09:49:12.902376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.348 [2024-11-19 09:49:12.902528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.348 [2024-11-19 09:49:12.902534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.348 [2024-11-19 09:49:12.902540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.348 [2024-11-19 09:49:12.902545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.348 [2024-11-19 09:49:12.914299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.348 [2024-11-19 09:49:12.914863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.348 [2024-11-19 09:49:12.914905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.348 [2024-11-19 09:49:12.914915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.348 [2024-11-19 09:49:12.915090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.348 [2024-11-19 09:49:12.915256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.348 [2024-11-19 09:49:12.915263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.348 [2024-11-19 09:49:12.915270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.348 [2024-11-19 09:49:12.915276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.348 [2024-11-19 09:49:12.927054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.348 [2024-11-19 09:49:12.927686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.348 [2024-11-19 09:49:12.927725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.348 [2024-11-19 09:49:12.927738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.348 [2024-11-19 09:49:12.927909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.348 [2024-11-19 09:49:12.928065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.348 [2024-11-19 09:49:12.928071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.348 [2024-11-19 09:49:12.928077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.348 [2024-11-19 09:49:12.928084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.348 [2024-11-19 09:49:12.939698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.348 [2024-11-19 09:49:12.940246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.348 [2024-11-19 09:49:12.940283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.348 [2024-11-19 09:49:12.940292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.348 [2024-11-19 09:49:12.940465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.348 [2024-11-19 09:49:12.940620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.348 [2024-11-19 09:49:12.940626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.348 [2024-11-19 09:49:12.940632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.348 [2024-11-19 09:49:12.940638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.348 [2024-11-19 09:49:12.952394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.348 [2024-11-19 09:49:12.952984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.348 [2024-11-19 09:49:12.953019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.348 [2024-11-19 09:49:12.953027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.348 [2024-11-19 09:49:12.953205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.348 [2024-11-19 09:49:12.953361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.348 [2024-11-19 09:49:12.953367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.348 [2024-11-19 09:49:12.953373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.349 [2024-11-19 09:49:12.953379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.349 [2024-11-19 09:49:12.965129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.349 [2024-11-19 09:49:12.965757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.349 [2024-11-19 09:49:12.965791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.349 [2024-11-19 09:49:12.965799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.349 [2024-11-19 09:49:12.965967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.349 [2024-11-19 09:49:12.966125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.349 [2024-11-19 09:49:12.966132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.349 [2024-11-19 09:49:12.966138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.349 [2024-11-19 09:49:12.966143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.349 [2024-11-19 09:49:12.977751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.349 [2024-11-19 09:49:12.978262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.349 [2024-11-19 09:49:12.978294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.349 [2024-11-19 09:49:12.978303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.349 [2024-11-19 09:49:12.978473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.349 [2024-11-19 09:49:12.978628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.349 [2024-11-19 09:49:12.978634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.349 [2024-11-19 09:49:12.978640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.349 [2024-11-19 09:49:12.978646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.349 [2024-11-19 09:49:12.990423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.349 [2024-11-19 09:49:12.991027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.349 [2024-11-19 09:49:12.991059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.349 [2024-11-19 09:49:12.991068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.349 [2024-11-19 09:49:12.991243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.349 [2024-11-19 09:49:12.991399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.349 [2024-11-19 09:49:12.991406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.349 [2024-11-19 09:49:12.991412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.349 [2024-11-19 09:49:12.991418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.349 [2024-11-19 09:49:13.003165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.349 [2024-11-19 09:49:13.003754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.349 [2024-11-19 09:49:13.003785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.349 [2024-11-19 09:49:13.003793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.349 [2024-11-19 09:49:13.003960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.349 [2024-11-19 09:49:13.004114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.349 [2024-11-19 09:49:13.004121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.349 [2024-11-19 09:49:13.004133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.349 [2024-11-19 09:49:13.004139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.349 [2024-11-19 09:49:13.016069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.349 [2024-11-19 09:49:13.016590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.349 [2024-11-19 09:49:13.016605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.349 [2024-11-19 09:49:13.016611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.349 [2024-11-19 09:49:13.016763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.349 [2024-11-19 09:49:13.016913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.349 [2024-11-19 09:49:13.016919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.349 [2024-11-19 09:49:13.016925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.349 [2024-11-19 09:49:13.016929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.349 [2024-11-19 09:49:13.028825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.349 [2024-11-19 09:49:13.029287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.349 [2024-11-19 09:49:13.029301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.349 [2024-11-19 09:49:13.029306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.349 [2024-11-19 09:49:13.029457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.349 [2024-11-19 09:49:13.029608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.349 [2024-11-19 09:49:13.029614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.349 [2024-11-19 09:49:13.029619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.349 [2024-11-19 09:49:13.029624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.349 [2024-11-19 09:49:13.041496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.349 [2024-11-19 09:49:13.042077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.349 [2024-11-19 09:49:13.042107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.349 [2024-11-19 09:49:13.042116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.349 [2024-11-19 09:49:13.042290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.349 [2024-11-19 09:49:13.042444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.349 [2024-11-19 09:49:13.042450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.349 [2024-11-19 09:49:13.042456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.349 [2024-11-19 09:49:13.042462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.349 [2024-11-19 09:49:13.054224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.349 [2024-11-19 09:49:13.054801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.349 [2024-11-19 09:49:13.054831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.349 [2024-11-19 09:49:13.054840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.349 [2024-11-19 09:49:13.055006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.349 [2024-11-19 09:49:13.055167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.349 [2024-11-19 09:49:13.055174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.349 [2024-11-19 09:49:13.055179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.349 [2024-11-19 09:49:13.055185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.349 [2024-11-19 09:49:13.066919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.349 [2024-11-19 09:49:13.067482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.349 [2024-11-19 09:49:13.067512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.349 [2024-11-19 09:49:13.067521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.349 [2024-11-19 09:49:13.067687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.349 [2024-11-19 09:49:13.067841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.349 [2024-11-19 09:49:13.067847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.349 [2024-11-19 09:49:13.067852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.350 [2024-11-19 09:49:13.067858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.350 [2024-11-19 09:49:13.079598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.350 [2024-11-19 09:49:13.080185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.350 [2024-11-19 09:49:13.080215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.350 [2024-11-19 09:49:13.080223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.350 [2024-11-19 09:49:13.080392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.350 [2024-11-19 09:49:13.080545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.350 [2024-11-19 09:49:13.080551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.350 [2024-11-19 09:49:13.080557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.350 [2024-11-19 09:49:13.080563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.612 [2024-11-19 09:49:13.092307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.612 [2024-11-19 09:49:13.092877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.612 [2024-11-19 09:49:13.092907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.612 [2024-11-19 09:49:13.092919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.612 [2024-11-19 09:49:13.093085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.612 [2024-11-19 09:49:13.093247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.612 [2024-11-19 09:49:13.093255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.093261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.093267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.105005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.105558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.105588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.105597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.613 [2024-11-19 09:49:13.105763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.613 [2024-11-19 09:49:13.105917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.613 [2024-11-19 09:49:13.105923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.105929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.105934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.117679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.118259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.118289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.118298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.613 [2024-11-19 09:49:13.118467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.613 [2024-11-19 09:49:13.118621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.613 [2024-11-19 09:49:13.118627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.118633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.118639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.130398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.130856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.130885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.130893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.613 [2024-11-19 09:49:13.131060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.613 [2024-11-19 09:49:13.131223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.613 [2024-11-19 09:49:13.131230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.131236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.131241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.143121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.143712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.143742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.143751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.613 [2024-11-19 09:49:13.143917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.613 [2024-11-19 09:49:13.144071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.613 [2024-11-19 09:49:13.144078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.144083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.144089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.155835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.156394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.156424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.156432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.613 [2024-11-19 09:49:13.156598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.613 [2024-11-19 09:49:13.156752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.613 [2024-11-19 09:49:13.156758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.156764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.156770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.168512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.169086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.169116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.169125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.613 [2024-11-19 09:49:13.169301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.613 [2024-11-19 09:49:13.169455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.613 [2024-11-19 09:49:13.169461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.169470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.169476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.181228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.181810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.181840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.181849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.613 [2024-11-19 09:49:13.182016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.613 [2024-11-19 09:49:13.182176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.613 [2024-11-19 09:49:13.182183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.182188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.182195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.193935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.194537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.194568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.194577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.613 [2024-11-19 09:49:13.194743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.613 [2024-11-19 09:49:13.194897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.613 [2024-11-19 09:49:13.194903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.194909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.194914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.206664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.207206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.207236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.207245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.613 [2024-11-19 09:49:13.207413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.613 [2024-11-19 09:49:13.207567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.613 [2024-11-19 09:49:13.207574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.613 [2024-11-19 09:49:13.207580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.613 [2024-11-19 09:49:13.207585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.613 [2024-11-19 09:49:13.219342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.613 [2024-11-19 09:49:13.219920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.613 [2024-11-19 09:49:13.219950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.613 [2024-11-19 09:49:13.219958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.220125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.220283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.220290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.220296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.220302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.232065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.614 [2024-11-19 09:49:13.232546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.614 [2024-11-19 09:49:13.232562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.614 [2024-11-19 09:49:13.232567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.232718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.232868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.232874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.232879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.232884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.244775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.614 [2024-11-19 09:49:13.245210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.614 [2024-11-19 09:49:13.245224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.614 [2024-11-19 09:49:13.245230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.245380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.245532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.245537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.245543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.245548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.257442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.614 [2024-11-19 09:49:13.258011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.614 [2024-11-19 09:49:13.258041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.614 [2024-11-19 09:49:13.258054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.258227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.258382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.258389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.258394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.258402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.270150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.614 [2024-11-19 09:49:13.270718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.614 [2024-11-19 09:49:13.270749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.614 [2024-11-19 09:49:13.270757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.270926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.271080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.271086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.271092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.271098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.282879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.614 [2024-11-19 09:49:13.283483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.614 [2024-11-19 09:49:13.283513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.614 [2024-11-19 09:49:13.283522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.283687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.283841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.283847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.283853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.283859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.295617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.614 [2024-11-19 09:49:13.296203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.614 [2024-11-19 09:49:13.296233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.614 [2024-11-19 09:49:13.296242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.296411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.296569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.296575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.296581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.296586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.308378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.614 [2024-11-19 09:49:13.308957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.614 [2024-11-19 09:49:13.308987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.614 [2024-11-19 09:49:13.308995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.309168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.309323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.309330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.309336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.309342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.321087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.614 [2024-11-19 09:49:13.321697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.614 [2024-11-19 09:49:13.321728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.614 [2024-11-19 09:49:13.321736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.321903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.322057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.322063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.322069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.322074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.333837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.614 [2024-11-19 09:49:13.334446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.614 [2024-11-19 09:49:13.334476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.614 [2024-11-19 09:49:13.334485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.614 [2024-11-19 09:49:13.334651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.614 [2024-11-19 09:49:13.334804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.614 [2024-11-19 09:49:13.334811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.614 [2024-11-19 09:49:13.334821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.614 [2024-11-19 09:49:13.334827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.614 [2024-11-19 09:49:13.346575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.615 [2024-11-19 09:49:13.347156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.615 [2024-11-19 09:49:13.347192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.615 [2024-11-19 09:49:13.347200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.615 [2024-11-19 09:49:13.347367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.615 [2024-11-19 09:49:13.347521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.615 [2024-11-19 09:49:13.347527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.615 [2024-11-19 09:49:13.347533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.615 [2024-11-19 09:49:13.347539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.878 [2024-11-19 09:49:13.359290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.878 [2024-11-19 09:49:13.359823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.878 [2024-11-19 09:49:13.359853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.878 [2024-11-19 09:49:13.359862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.878 [2024-11-19 09:49:13.360029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.878 [2024-11-19 09:49:13.360189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.878 [2024-11-19 09:49:13.360196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.878 [2024-11-19 09:49:13.360202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.878 [2024-11-19 09:49:13.360208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.878 [2024-11-19 09:49:13.371947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.878 [2024-11-19 09:49:13.372443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.878 [2024-11-19 09:49:13.372458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.878 [2024-11-19 09:49:13.372464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.878 [2024-11-19 09:49:13.372615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.878 [2024-11-19 09:49:13.372766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.878 [2024-11-19 09:49:13.372771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.878 [2024-11-19 09:49:13.372777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.878 [2024-11-19 09:49:13.372781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.878 [2024-11-19 09:49:13.384666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.878 [2024-11-19 09:49:13.385117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.878 [2024-11-19 09:49:13.385130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.878 [2024-11-19 09:49:13.385135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.878 [2024-11-19 09:49:13.385291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.878 [2024-11-19 09:49:13.385442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.878 [2024-11-19 09:49:13.385448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.878 [2024-11-19 09:49:13.385453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.878 [2024-11-19 09:49:13.385458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.878 [2024-11-19 09:49:13.397338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.878 [2024-11-19 09:49:13.397678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.878 [2024-11-19 09:49:13.397690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.878 [2024-11-19 09:49:13.397696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.878 [2024-11-19 09:49:13.397846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.878 [2024-11-19 09:49:13.397996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.878 [2024-11-19 09:49:13.398002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.878 [2024-11-19 09:49:13.398007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.878 [2024-11-19 09:49:13.398011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.878 [2024-11-19 09:49:13.410039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.878 [2024-11-19 09:49:13.410600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.878 [2024-11-19 09:49:13.410630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.878 [2024-11-19 09:49:13.410639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.878 [2024-11-19 09:49:13.410805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.878 [2024-11-19 09:49:13.410959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.878 [2024-11-19 09:49:13.410966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.878 [2024-11-19 09:49:13.410971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.878 [2024-11-19 09:49:13.410977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.878 7100.50 IOPS, 27.74 MiB/s [2024-11-19T08:49:13.626Z] [2024-11-19 09:49:13.423873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.878 [2024-11-19 09:49:13.424278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.878 [2024-11-19 09:49:13.424309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.878 [2024-11-19 09:49:13.424322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.878 [2024-11-19 09:49:13.424491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.878 [2024-11-19 09:49:13.424645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.878 [2024-11-19 09:49:13.424651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.878 [2024-11-19 09:49:13.424656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.878 [2024-11-19 09:49:13.424662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.878 [2024-11-19 09:49:13.436607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.878 [2024-11-19 09:49:13.437214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.878 [2024-11-19 09:49:13.437245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.878 [2024-11-19 09:49:13.437254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.878 [2024-11-19 09:49:13.437423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.878 [2024-11-19 09:49:13.437578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.878 [2024-11-19 09:49:13.437584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.878 [2024-11-19 09:49:13.437590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.878 [2024-11-19 09:49:13.437595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.878 [2024-11-19 09:49:13.449352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.878 [2024-11-19 09:49:13.449838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.878 [2024-11-19 09:49:13.449853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.878 [2024-11-19 09:49:13.449859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.878 [2024-11-19 09:49:13.450010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.878 [2024-11-19 09:49:13.450167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.878 [2024-11-19 09:49:13.450173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.878 [2024-11-19 09:49:13.450179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.450183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.462066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.462565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.462595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.462604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.462770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.462928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.462934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.879 [2024-11-19 09:49:13.462940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.462946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.474691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.475187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.475203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.475208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.475359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.475510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.475516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.879 [2024-11-19 09:49:13.475521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.475526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.487419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.487996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.488026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.488034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.488206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.488361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.488367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.879 [2024-11-19 09:49:13.488373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.488379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.500183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.500696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.500711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.500717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.500868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.501019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.501025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.879 [2024-11-19 09:49:13.501034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.501039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.512793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.513283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.513313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.513321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.513490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.513644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.513650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.879 [2024-11-19 09:49:13.513656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.513662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.525424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.525964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.525995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.526003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.526176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.526331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.526337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.879 [2024-11-19 09:49:13.526343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.526349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.538116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.538681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.538711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.538720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.538886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.539040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.539046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.879 [2024-11-19 09:49:13.539052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.539057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.550810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.551286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.551301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.551307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.551458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.551609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.551614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.879 [2024-11-19 09:49:13.551619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.551624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.563509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.563961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.563974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.563979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.564129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.564286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.564292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.879 [2024-11-19 09:49:13.564297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.879 [2024-11-19 09:49:13.564301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.879 [2024-11-19 09:49:13.576191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.879 [2024-11-19 09:49:13.576540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.879 [2024-11-19 09:49:13.576554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.879 [2024-11-19 09:49:13.576559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.879 [2024-11-19 09:49:13.576710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.879 [2024-11-19 09:49:13.576861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.879 [2024-11-19 09:49:13.576867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.880 [2024-11-19 09:49:13.576872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.880 [2024-11-19 09:49:13.576877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.880 [2024-11-19 09:49:13.588910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.880 [2024-11-19 09:49:13.589375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.880 [2024-11-19 09:49:13.589388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.880 [2024-11-19 09:49:13.589397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.880 [2024-11-19 09:49:13.589548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.880 [2024-11-19 09:49:13.589698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.880 [2024-11-19 09:49:13.589704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.880 [2024-11-19 09:49:13.589709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.880 [2024-11-19 09:49:13.589713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.880 [2024-11-19 09:49:13.601599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.880 [2024-11-19 09:49:13.602169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.880 [2024-11-19 09:49:13.602200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.880 [2024-11-19 09:49:13.602208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.880 [2024-11-19 09:49:13.602377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.880 [2024-11-19 09:49:13.602531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.880 [2024-11-19 09:49:13.602537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.880 [2024-11-19 09:49:13.602543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.880 [2024-11-19 09:49:13.602549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:26.880 [2024-11-19 09:49:13.614298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:26.880 [2024-11-19 09:49:13.614874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.880 [2024-11-19 09:49:13.614904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:26.880 [2024-11-19 09:49:13.614913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:26.880 [2024-11-19 09:49:13.615079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:26.880 [2024-11-19 09:49:13.615240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:26.880 [2024-11-19 09:49:13.615248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:26.880 [2024-11-19 09:49:13.615253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:26.880 [2024-11-19 09:49:13.615259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.143 [2024-11-19 09:49:13.627011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.143 [2024-11-19 09:49:13.627628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.143 [2024-11-19 09:49:13.627659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.143 [2024-11-19 09:49:13.627667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.143 [2024-11-19 09:49:13.627833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.143 [2024-11-19 09:49:13.627995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.143 [2024-11-19 09:49:13.628001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.143 [2024-11-19 09:49:13.628007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.143 [2024-11-19 09:49:13.628012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.143 [2024-11-19 09:49:13.639642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.143 [2024-11-19 09:49:13.640217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.143 [2024-11-19 09:49:13.640248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.143 [2024-11-19 09:49:13.640257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.143 [2024-11-19 09:49:13.640425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.143 [2024-11-19 09:49:13.640579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.143 [2024-11-19 09:49:13.640585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.143 [2024-11-19 09:49:13.640592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.143 [2024-11-19 09:49:13.640598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.143 [2024-11-19 09:49:13.652353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.143 [2024-11-19 09:49:13.652740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.143 [2024-11-19 09:49:13.652755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.143 [2024-11-19 09:49:13.652761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.143 [2024-11-19 09:49:13.652912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.143 [2024-11-19 09:49:13.653062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.143 [2024-11-19 09:49:13.653069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.143 [2024-11-19 09:49:13.653074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.143 [2024-11-19 09:49:13.653079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.143 [2024-11-19 09:49:13.664967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.143 [2024-11-19 09:49:13.665443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.143 [2024-11-19 09:49:13.665457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.143 [2024-11-19 09:49:13.665462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.143 [2024-11-19 09:49:13.665613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.143 [2024-11-19 09:49:13.665763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.143 [2024-11-19 09:49:13.665769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.143 [2024-11-19 09:49:13.665778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.143 [2024-11-19 09:49:13.665782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.143 [2024-11-19 09:49:13.677674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.143 [2024-11-19 09:49:13.678168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.143 [2024-11-19 09:49:13.678181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.143 [2024-11-19 09:49:13.678186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.143 [2024-11-19 09:49:13.678337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.143 [2024-11-19 09:49:13.678487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.143 [2024-11-19 09:49:13.678493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.143 [2024-11-19 09:49:13.678498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.143 [2024-11-19 09:49:13.678502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.143 [2024-11-19 09:49:13.690462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.143 [2024-11-19 09:49:13.691027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.144 [2024-11-19 09:49:13.691057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.144 [2024-11-19 09:49:13.691066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.144 [2024-11-19 09:49:13.691238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.144 [2024-11-19 09:49:13.691393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.144 [2024-11-19 09:49:13.691399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.144 [2024-11-19 09:49:13.691406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.144 [2024-11-19 09:49:13.691411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.144 [2024-11-19 09:49:13.703156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.144 [2024-11-19 09:49:13.703651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.144 [2024-11-19 09:49:13.703666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.144 [2024-11-19 09:49:13.703671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.144 [2024-11-19 09:49:13.703822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.144 [2024-11-19 09:49:13.703972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.144 [2024-11-19 09:49:13.703979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.144 [2024-11-19 09:49:13.703984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.144 [2024-11-19 09:49:13.703989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.144 [2024-11-19 09:49:13.715892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.144 [2024-11-19 09:49:13.716380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.144 [2024-11-19 09:49:13.716411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.144 [2024-11-19 09:49:13.716420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.144 [2024-11-19 09:49:13.716586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.144 [2024-11-19 09:49:13.716740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.144 [2024-11-19 09:49:13.716746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.144 [2024-11-19 09:49:13.716752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.144 [2024-11-19 09:49:13.716758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.144 [2024-11-19 09:49:13.728508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.144 [2024-11-19 09:49:13.729000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.144 [2024-11-19 09:49:13.729016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.144 [2024-11-19 09:49:13.729021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.144 [2024-11-19 09:49:13.729178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.144 [2024-11-19 09:49:13.729337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.144 [2024-11-19 09:49:13.729343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.144 [2024-11-19 09:49:13.729349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.144 [2024-11-19 09:49:13.729353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.144 [2024-11-19 09:49:13.741118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.144 [2024-11-19 09:49:13.741559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.144 [2024-11-19 09:49:13.741572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.144 [2024-11-19 09:49:13.741577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.144 [2024-11-19 09:49:13.741728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.144 [2024-11-19 09:49:13.741878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.144 [2024-11-19 09:49:13.741884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.144 [2024-11-19 09:49:13.741889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.144 [2024-11-19 09:49:13.741893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.144 [2024-11-19 09:49:13.753789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.144 [2024-11-19 09:49:13.754211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.144 [2024-11-19 09:49:13.754224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.144 [2024-11-19 09:49:13.754233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.144 [2024-11-19 09:49:13.754383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.144 [2024-11-19 09:49:13.754534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.144 [2024-11-19 09:49:13.754540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.144 [2024-11-19 09:49:13.754545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.144 [2024-11-19 09:49:13.754550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.144 [2024-11-19 09:49:13.766447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.144 [2024-11-19 09:49:13.766904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.144 [2024-11-19 09:49:13.766916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.144 [2024-11-19 09:49:13.766921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.144 [2024-11-19 09:49:13.767071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.144 [2024-11-19 09:49:13.767227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.144 [2024-11-19 09:49:13.767233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.144 [2024-11-19 09:49:13.767239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.144 [2024-11-19 09:49:13.767243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.144 [2024-11-19 09:49:13.779134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.144 [2024-11-19 09:49:13.779603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.144 [2024-11-19 09:49:13.779615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.144 [2024-11-19 09:49:13.779621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.144 [2024-11-19 09:49:13.779771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.144 [2024-11-19 09:49:13.779921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.144 [2024-11-19 09:49:13.779927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.144 [2024-11-19 09:49:13.779932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.144 [2024-11-19 09:49:13.779937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.144 [2024-11-19 09:49:13.791833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.144 [2024-11-19 09:49:13.792295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.144 [2024-11-19 09:49:13.792308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.144 [2024-11-19 09:49:13.792313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.144 [2024-11-19 09:49:13.792464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.144 [2024-11-19 09:49:13.792620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.144 [2024-11-19 09:49:13.792625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.144 [2024-11-19 09:49:13.792630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.144 [2024-11-19 09:49:13.792635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.145 [2024-11-19 09:49:13.804533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.145 [2024-11-19 09:49:13.805015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.145 [2024-11-19 09:49:13.805028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.145 [2024-11-19 09:49:13.805033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.145 [2024-11-19 09:49:13.805188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.145 [2024-11-19 09:49:13.805339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.145 [2024-11-19 09:49:13.805345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.145 [2024-11-19 09:49:13.805349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.145 [2024-11-19 09:49:13.805354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.145 [2024-11-19 09:49:13.817248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.145 [2024-11-19 09:49:13.817694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.145 [2024-11-19 09:49:13.817706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.145 [2024-11-19 09:49:13.817711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.145 [2024-11-19 09:49:13.817862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.145 [2024-11-19 09:49:13.818012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.145 [2024-11-19 09:49:13.818018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.145 [2024-11-19 09:49:13.818024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.145 [2024-11-19 09:49:13.818028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.145 [2024-11-19 09:49:13.829928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.145 [2024-11-19 09:49:13.830381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.145 [2024-11-19 09:49:13.830394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.145 [2024-11-19 09:49:13.830399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.145 [2024-11-19 09:49:13.830550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.145 [2024-11-19 09:49:13.830701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.145 [2024-11-19 09:49:13.830707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.145 [2024-11-19 09:49:13.830715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.145 [2024-11-19 09:49:13.830719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.145 [2024-11-19 09:49:13.842626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.145 [2024-11-19 09:49:13.843109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.145 [2024-11-19 09:49:13.843121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.145 [2024-11-19 09:49:13.843126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.145 [2024-11-19 09:49:13.843281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.145 [2024-11-19 09:49:13.843432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.145 [2024-11-19 09:49:13.843439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.145 [2024-11-19 09:49:13.843444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.145 [2024-11-19 09:49:13.843448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.145 [2024-11-19 09:49:13.855342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.145 [2024-11-19 09:49:13.855826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.145 [2024-11-19 09:49:13.855839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.145 [2024-11-19 09:49:13.855844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.145 [2024-11-19 09:49:13.855994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.145 [2024-11-19 09:49:13.856145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.145 [2024-11-19 09:49:13.856151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.145 [2024-11-19 09:49:13.856156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.145 [2024-11-19 09:49:13.856165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.145 [2024-11-19 09:49:13.868058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.145 [2024-11-19 09:49:13.868489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.145 [2024-11-19 09:49:13.868502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.145 [2024-11-19 09:49:13.868507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.145 [2024-11-19 09:49:13.868657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.145 [2024-11-19 09:49:13.868808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.145 [2024-11-19 09:49:13.868813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.145 [2024-11-19 09:49:13.868818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.145 [2024-11-19 09:49:13.868823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.145 [2024-11-19 09:49:13.880723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.145 [2024-11-19 09:49:13.881268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.145 [2024-11-19 09:49:13.881298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.145 [2024-11-19 09:49:13.881307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.145 [2024-11-19 09:49:13.881476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.145 [2024-11-19 09:49:13.881631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.145 [2024-11-19 09:49:13.881637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.145 [2024-11-19 09:49:13.881644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.145 [2024-11-19 09:49:13.881649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.407 [2024-11-19 09:49:13.893402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.407 [2024-11-19 09:49:13.893898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.407 [2024-11-19 09:49:13.893912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.407 [2024-11-19 09:49:13.893918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.407 [2024-11-19 09:49:13.894069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.407 [2024-11-19 09:49:13.894225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:13.894231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:13.894237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.408 [2024-11-19 09:49:13.894242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.408 [2024-11-19 09:49:13.906128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.408 [2024-11-19 09:49:13.906577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.408 [2024-11-19 09:49:13.906590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.408 [2024-11-19 09:49:13.906596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.408 [2024-11-19 09:49:13.906746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.408 [2024-11-19 09:49:13.906897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:13.906903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:13.906908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.408 [2024-11-19 09:49:13.906913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.408 [2024-11-19 09:49:13.918809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.408 [2024-11-19 09:49:13.919245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.408 [2024-11-19 09:49:13.919258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.408 [2024-11-19 09:49:13.919268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.408 [2024-11-19 09:49:13.919418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.408 [2024-11-19 09:49:13.919569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:13.919575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:13.919580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.408 [2024-11-19 09:49:13.919584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.408 [2024-11-19 09:49:13.931497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.408 [2024-11-19 09:49:13.931987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.408 [2024-11-19 09:49:13.931999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.408 [2024-11-19 09:49:13.932005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.408 [2024-11-19 09:49:13.932155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.408 [2024-11-19 09:49:13.932312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:13.932318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:13.932323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.408 [2024-11-19 09:49:13.932328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.408 [2024-11-19 09:49:13.944222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.408 [2024-11-19 09:49:13.944788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.408 [2024-11-19 09:49:13.944818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.408 [2024-11-19 09:49:13.944826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.408 [2024-11-19 09:49:13.944992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.408 [2024-11-19 09:49:13.945147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:13.945153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:13.945166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.408 [2024-11-19 09:49:13.945172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.408 [2024-11-19 09:49:13.956918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.408 [2024-11-19 09:49:13.957493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.408 [2024-11-19 09:49:13.957524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.408 [2024-11-19 09:49:13.957532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.408 [2024-11-19 09:49:13.957699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.408 [2024-11-19 09:49:13.957856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:13.957863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:13.957869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.408 [2024-11-19 09:49:13.957874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.408 [2024-11-19 09:49:13.969624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.408 [2024-11-19 09:49:13.970201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.408 [2024-11-19 09:49:13.970231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.408 [2024-11-19 09:49:13.970240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.408 [2024-11-19 09:49:13.970406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.408 [2024-11-19 09:49:13.970560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:13.970566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:13.970572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.408 [2024-11-19 09:49:13.970577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.408 [2024-11-19 09:49:13.982333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.408 [2024-11-19 09:49:13.982919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.408 [2024-11-19 09:49:13.982949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.408 [2024-11-19 09:49:13.982958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.408 [2024-11-19 09:49:13.983124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.408 [2024-11-19 09:49:13.983285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:13.983292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:13.983298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.408 [2024-11-19 09:49:13.983303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.408 [2024-11-19 09:49:13.995042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.408 [2024-11-19 09:49:13.995604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.408 [2024-11-19 09:49:13.995634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.408 [2024-11-19 09:49:13.995643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.408 [2024-11-19 09:49:13.995810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.408 [2024-11-19 09:49:13.995963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:13.995971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:13.995980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.408 [2024-11-19 09:49:13.995986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.408 [2024-11-19 09:49:14.007732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.408 [2024-11-19 09:49:14.008180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.408 [2024-11-19 09:49:14.008211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.408 [2024-11-19 09:49:14.008221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.408 [2024-11-19 09:49:14.008389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.408 [2024-11-19 09:49:14.008543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.408 [2024-11-19 09:49:14.008550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.408 [2024-11-19 09:49:14.008556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.008562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.020459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.021043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.021073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.021083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.021254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.021409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.021415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.409 [2024-11-19 09:49:14.021421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.021427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.033175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.033644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.033659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.033665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.033816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.033967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.033972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.409 [2024-11-19 09:49:14.033977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.033982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.045866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.046409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.046439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.046448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.046614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.046768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.046774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.409 [2024-11-19 09:49:14.046780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.046786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.058602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.059199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.059229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.059238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.059407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.059561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.059567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.409 [2024-11-19 09:49:14.059573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.059579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.071327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.071821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.071835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.071841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.071992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.072142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.072148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.409 [2024-11-19 09:49:14.072153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.072164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.084037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.084592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.084623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.084635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.084801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.084954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.084961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.409 [2024-11-19 09:49:14.084966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.084972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.096715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.097350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.097380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.097389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.097557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.097711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.097717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.409 [2024-11-19 09:49:14.097723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.097729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.109332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.109908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.109938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.109946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.110113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.110275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.110282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.409 [2024-11-19 09:49:14.110288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.110293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.122030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.122603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.122633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.122641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.122807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.122965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.122971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.409 [2024-11-19 09:49:14.122977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.409 [2024-11-19 09:49:14.122983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.409 [2024-11-19 09:49:14.134746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.409 [2024-11-19 09:49:14.135322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.409 [2024-11-19 09:49:14.135353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.409 [2024-11-19 09:49:14.135362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.409 [2024-11-19 09:49:14.135528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.409 [2024-11-19 09:49:14.135682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.409 [2024-11-19 09:49:14.135688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.410 [2024-11-19 09:49:14.135694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.410 [2024-11-19 09:49:14.135700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.410 [2024-11-19 09:49:14.147448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.410 [2024-11-19 09:49:14.148031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.410 [2024-11-19 09:49:14.148061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.410 [2024-11-19 09:49:14.148070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.410 [2024-11-19 09:49:14.148245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.410 [2024-11-19 09:49:14.148400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.410 [2024-11-19 09:49:14.148406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.410 [2024-11-19 09:49:14.148412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.410 [2024-11-19 09:49:14.148417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.672 [2024-11-19 09:49:14.160171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.672 [2024-11-19 09:49:14.160741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.672 [2024-11-19 09:49:14.160771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.672 [2024-11-19 09:49:14.160779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.672 [2024-11-19 09:49:14.160946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.672 [2024-11-19 09:49:14.161100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.672 [2024-11-19 09:49:14.161106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.672 [2024-11-19 09:49:14.161115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.672 [2024-11-19 09:49:14.161121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.672 [2024-11-19 09:49:14.172869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.672 [2024-11-19 09:49:14.173461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.672 [2024-11-19 09:49:14.173492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.672 [2024-11-19 09:49:14.173500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.672 [2024-11-19 09:49:14.173666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.672 [2024-11-19 09:49:14.173820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.672 [2024-11-19 09:49:14.173826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.672 [2024-11-19 09:49:14.173832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.672 [2024-11-19 09:49:14.173838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.672 [2024-11-19 09:49:14.185582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.672 [2024-11-19 09:49:14.186050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.672 [2024-11-19 09:49:14.186080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.672 [2024-11-19 09:49:14.186089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.672 [2024-11-19 09:49:14.186262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.672 [2024-11-19 09:49:14.186416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.672 [2024-11-19 09:49:14.186423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.672 [2024-11-19 09:49:14.186429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.672 [2024-11-19 09:49:14.186434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.672 [2024-11-19 09:49:14.198318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.672 [2024-11-19 09:49:14.198822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.672 [2024-11-19 09:49:14.198837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.672 [2024-11-19 09:49:14.198843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.672 [2024-11-19 09:49:14.198994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.672 [2024-11-19 09:49:14.199144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.672 [2024-11-19 09:49:14.199150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.672 [2024-11-19 09:49:14.199155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.672 [2024-11-19 09:49:14.199166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.672 [2024-11-19 09:49:14.211038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.672 [2024-11-19 09:49:14.211505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.672 [2024-11-19 09:49:14.211518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.672 [2024-11-19 09:49:14.211524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.672 [2024-11-19 09:49:14.211674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.672 [2024-11-19 09:49:14.211825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.672 [2024-11-19 09:49:14.211830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.672 [2024-11-19 09:49:14.211835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.672 [2024-11-19 09:49:14.211840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.672 [2024-11-19 09:49:14.223719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.672 [2024-11-19 09:49:14.224204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.224217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.224222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.224373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.224524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.224529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.224534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.224539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.673 [2024-11-19 09:49:14.236429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.673 [2024-11-19 09:49:14.236976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.237006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.237015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.237188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.237343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.237350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.237356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.237361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.673 [2024-11-19 09:49:14.249101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.673 [2024-11-19 09:49:14.249674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.249703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.249719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.249886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.250040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.250046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.250052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.250058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.673 [2024-11-19 09:49:14.261800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.673 [2024-11-19 09:49:14.262409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.262439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.262448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.262617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.262770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.262777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.262782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.262788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.673 [2024-11-19 09:49:14.274536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.673 [2024-11-19 09:49:14.275128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.275164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.275173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.275340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.275495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.275501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.275507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.275513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.673 [2024-11-19 09:49:14.287251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.673 [2024-11-19 09:49:14.287702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.287732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.287740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.287906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.288064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.288071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.288076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.288082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.673 [2024-11-19 09:49:14.299970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.673 [2024-11-19 09:49:14.300571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.300601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.300609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.300775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.300929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.300936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.300942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.300947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.673 [2024-11-19 09:49:14.312700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.673 [2024-11-19 09:49:14.313260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.313290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.313299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.313467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.313621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.313628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.313633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.313639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.673 [2024-11-19 09:49:14.325381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.673 [2024-11-19 09:49:14.325885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.325900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.325906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.326056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.326212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.326218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.326228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.326233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.673 [2024-11-19 09:49:14.338128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.673 [2024-11-19 09:49:14.338576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.673 [2024-11-19 09:49:14.338589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.673 [2024-11-19 09:49:14.338595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.673 [2024-11-19 09:49:14.338745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.673 [2024-11-19 09:49:14.338896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.673 [2024-11-19 09:49:14.338902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.673 [2024-11-19 09:49:14.338907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.673 [2024-11-19 09:49:14.338912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.674 [2024-11-19 09:49:14.350792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.674 [2024-11-19 09:49:14.351282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.674 [2024-11-19 09:49:14.351313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.674 [2024-11-19 09:49:14.351321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.674 [2024-11-19 09:49:14.351490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.674 [2024-11-19 09:49:14.351645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.674 [2024-11-19 09:49:14.351651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.674 [2024-11-19 09:49:14.351657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.674 [2024-11-19 09:49:14.351663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.674 [2024-11-19 09:49:14.363411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.674 [2024-11-19 09:49:14.363954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.674 [2024-11-19 09:49:14.363984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.674 [2024-11-19 09:49:14.363993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.674 [2024-11-19 09:49:14.364167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.674 [2024-11-19 09:49:14.364322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.674 [2024-11-19 09:49:14.364328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.674 [2024-11-19 09:49:14.364334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.674 [2024-11-19 09:49:14.364340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.674 [2024-11-19 09:49:14.376091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.674 [2024-11-19 09:49:14.376705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.674 [2024-11-19 09:49:14.376735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.674 [2024-11-19 09:49:14.376744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.674 [2024-11-19 09:49:14.376910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.674 [2024-11-19 09:49:14.377064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.674 [2024-11-19 09:49:14.377070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.674 [2024-11-19 09:49:14.377075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.674 [2024-11-19 09:49:14.377081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.674 [2024-11-19 09:49:14.388837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.674 [2024-11-19 09:49:14.389405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.674 [2024-11-19 09:49:14.389435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.674 [2024-11-19 09:49:14.389444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.674 [2024-11-19 09:49:14.389610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.674 [2024-11-19 09:49:14.389764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.674 [2024-11-19 09:49:14.389770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.674 [2024-11-19 09:49:14.389776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.674 [2024-11-19 09:49:14.389782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.674 [2024-11-19 09:49:14.401521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.674 [2024-11-19 09:49:14.401963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.674 [2024-11-19 09:49:14.401978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.674 [2024-11-19 09:49:14.401983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.674 [2024-11-19 09:49:14.402134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.674 [2024-11-19 09:49:14.402290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.674 [2024-11-19 09:49:14.402297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.674 [2024-11-19 09:49:14.402302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.674 [2024-11-19 09:49:14.402307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.674 [2024-11-19 09:49:14.414192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.674 [2024-11-19 09:49:14.414738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.674 [2024-11-19 09:49:14.414768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.674 [2024-11-19 09:49:14.414780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.674 [2024-11-19 09:49:14.414947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.674 [2024-11-19 09:49:14.415100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.674 [2024-11-19 09:49:14.415107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.674 [2024-11-19 09:49:14.415112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.674 [2024-11-19 09:49:14.415118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.937 5680.40 IOPS, 22.19 MiB/s [2024-11-19T08:49:14.685Z] [2024-11-19 09:49:14.427729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.937 [2024-11-19 09:49:14.428313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.937 [2024-11-19 09:49:14.428343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.937 [2024-11-19 09:49:14.428351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.937 [2024-11-19 09:49:14.428517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.937 [2024-11-19 09:49:14.428671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.937 [2024-11-19 09:49:14.428678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.937 [2024-11-19 09:49:14.428684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.937 [2024-11-19 09:49:14.428690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.937 [2024-11-19 09:49:14.440450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.937 [2024-11-19 09:49:14.440954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.937 [2024-11-19 09:49:14.440969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.937 [2024-11-19 09:49:14.440975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.937 [2024-11-19 09:49:14.441126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.937 [2024-11-19 09:49:14.441281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.937 [2024-11-19 09:49:14.441288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.937 [2024-11-19 09:49:14.441293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.937 [2024-11-19 09:49:14.441298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.937 [2024-11-19 09:49:14.453174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.937 [2024-11-19 09:49:14.453645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.937 [2024-11-19 09:49:14.453658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.937 [2024-11-19 09:49:14.453664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.937 [2024-11-19 09:49:14.453814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.937 [2024-11-19 09:49:14.453969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.937 [2024-11-19 09:49:14.453975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.937 [2024-11-19 09:49:14.453980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.937 [2024-11-19 09:49:14.453985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.937 [2024-11-19 09:49:14.465888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.937 [2024-11-19 09:49:14.466491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.937 [2024-11-19 09:49:14.466522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.937 [2024-11-19 09:49:14.466530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.937 [2024-11-19 09:49:14.466696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.937 [2024-11-19 09:49:14.466851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.937 [2024-11-19 09:49:14.466858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.937 [2024-11-19 09:49:14.466865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.937 [2024-11-19 09:49:14.466871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.937 [2024-11-19 09:49:14.478636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:27.937 [2024-11-19 09:49:14.479120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.937 [2024-11-19 09:49:14.479136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:27.937 [2024-11-19 09:49:14.479141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:27.937 [2024-11-19 09:49:14.479297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:27.937 [2024-11-19 09:49:14.479449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:27.937 [2024-11-19 09:49:14.479454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:27.937 [2024-11-19 09:49:14.479459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:27.937 [2024-11-19 09:49:14.479464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:27.937 [2024-11-19 09:49:14.491354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.937 [2024-11-19 09:49:14.491841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.937 [2024-11-19 09:49:14.491853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.937 [2024-11-19 09:49:14.491858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.937 [2024-11-19 09:49:14.492009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.937 [2024-11-19 09:49:14.492164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.937 [2024-11-19 09:49:14.492170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.937 [2024-11-19 09:49:14.492179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.937 [2024-11-19 09:49:14.492184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.937 [2024-11-19 09:49:14.504073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.937 [2024-11-19 09:49:14.504632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.937 [2024-11-19 09:49:14.504662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.937 [2024-11-19 09:49:14.504671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.937 [2024-11-19 09:49:14.504837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.937 [2024-11-19 09:49:14.504991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.937 [2024-11-19 09:49:14.504997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.937 [2024-11-19 09:49:14.505003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.937 [2024-11-19 09:49:14.505008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.937 [2024-11-19 09:49:14.516761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.937 [2024-11-19 09:49:14.517294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.937 [2024-11-19 09:49:14.517325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.937 [2024-11-19 09:49:14.517333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.937 [2024-11-19 09:49:14.517502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.937 [2024-11-19 09:49:14.517656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.937 [2024-11-19 09:49:14.517663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.937 [2024-11-19 09:49:14.517668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.937 [2024-11-19 09:49:14.517674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.937 [2024-11-19 09:49:14.529423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.937 [2024-11-19 09:49:14.529997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.530028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.530036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.530209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.530365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.530371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.530377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.530382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.938 [2024-11-19 09:49:14.542131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.938 [2024-11-19 09:49:14.542626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.542641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.542647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.542798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.542949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.542955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.542960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.542964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.938 [2024-11-19 09:49:14.554839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.938 [2024-11-19 09:49:14.555287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.555317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.555326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.555494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.555648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.555654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.555660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.555667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.938 [2024-11-19 09:49:14.567555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.938 [2024-11-19 09:49:14.568124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.568154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.568169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.568336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.568490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.568496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.568501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.568507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.938 [2024-11-19 09:49:14.580245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.938 [2024-11-19 09:49:14.580777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.580807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.580819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.580985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.581139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.581145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.581151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.581157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.938 [2024-11-19 09:49:14.592908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.938 [2024-11-19 09:49:14.593361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.593376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.593382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.593532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.593683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.593689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.593694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.593699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.938 [2024-11-19 09:49:14.605574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.938 [2024-11-19 09:49:14.606060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.606072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.606077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.606232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.606383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.606389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.606394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.606399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.938 [2024-11-19 09:49:14.618281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.938 [2024-11-19 09:49:14.618714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.618745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.618754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.618920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.619078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.619085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.619091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.619097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.938 [2024-11-19 09:49:14.630988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.938 [2024-11-19 09:49:14.631569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.631599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.631608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.631774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.631936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.631943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.631948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.631954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.938 [2024-11-19 09:49:14.643713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.938 [2024-11-19 09:49:14.644292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.938 [2024-11-19 09:49:14.644322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.938 [2024-11-19 09:49:14.644331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.938 [2024-11-19 09:49:14.644497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.938 [2024-11-19 09:49:14.644651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.938 [2024-11-19 09:49:14.644657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.938 [2024-11-19 09:49:14.644663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.938 [2024-11-19 09:49:14.644668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.939 [2024-11-19 09:49:14.656449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.939 [2024-11-19 09:49:14.656948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.939 [2024-11-19 09:49:14.656963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.939 [2024-11-19 09:49:14.656969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.939 [2024-11-19 09:49:14.657120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.939 [2024-11-19 09:49:14.657276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.939 [2024-11-19 09:49:14.657282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.939 [2024-11-19 09:49:14.657291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.939 [2024-11-19 09:49:14.657296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:27.939 [2024-11-19 09:49:14.669180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:27.939 [2024-11-19 09:49:14.669721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:27.939 [2024-11-19 09:49:14.669751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:27.939 [2024-11-19 09:49:14.669759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:27.939 [2024-11-19 09:49:14.669925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:27.939 [2024-11-19 09:49:14.670079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:27.939 [2024-11-19 09:49:14.670085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:27.939 [2024-11-19 09:49:14.670092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:27.939 [2024-11-19 09:49:14.670097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.202 [2024-11-19 09:49:14.681855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.202 [2024-11-19 09:49:14.682422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.202 [2024-11-19 09:49:14.682452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.202 [2024-11-19 09:49:14.682461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.202 [2024-11-19 09:49:14.682628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.202 [2024-11-19 09:49:14.682781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.202 [2024-11-19 09:49:14.682787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.202 [2024-11-19 09:49:14.682793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.202 [2024-11-19 09:49:14.682799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.202 [2024-11-19 09:49:14.694550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.202 [2024-11-19 09:49:14.695129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.202 [2024-11-19 09:49:14.695166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.202 [2024-11-19 09:49:14.695175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.202 [2024-11-19 09:49:14.695344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.202 [2024-11-19 09:49:14.695498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.202 [2024-11-19 09:49:14.695504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.202 [2024-11-19 09:49:14.695510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.202 [2024-11-19 09:49:14.695516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.202 [2024-11-19 09:49:14.707253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.202 [2024-11-19 09:49:14.707836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.202 [2024-11-19 09:49:14.707866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.202 [2024-11-19 09:49:14.707875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.202 [2024-11-19 09:49:14.708041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.202 [2024-11-19 09:49:14.708202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.202 [2024-11-19 09:49:14.708209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.202 [2024-11-19 09:49:14.708215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.202 [2024-11-19 09:49:14.708220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.202 [2024-11-19 09:49:14.719868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.202 [2024-11-19 09:49:14.720377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.202 [2024-11-19 09:49:14.720407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.202 [2024-11-19 09:49:14.720415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.202 [2024-11-19 09:49:14.720581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.202 [2024-11-19 09:49:14.720735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.202 [2024-11-19 09:49:14.720742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.202 [2024-11-19 09:49:14.720747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.202 [2024-11-19 09:49:14.720753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.202 [2024-11-19 09:49:14.732530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.202 [2024-11-19 09:49:14.733109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.202 [2024-11-19 09:49:14.733139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.202 [2024-11-19 09:49:14.733148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.202 [2024-11-19 09:49:14.733325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.202 [2024-11-19 09:49:14.733480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.202 [2024-11-19 09:49:14.733488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.202 [2024-11-19 09:49:14.733495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.202 [2024-11-19 09:49:14.733500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.202 [2024-11-19 09:49:14.745270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.202 [2024-11-19 09:49:14.745834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.202 [2024-11-19 09:49:14.745865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.202 [2024-11-19 09:49:14.745877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.202 [2024-11-19 09:49:14.746043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.202 [2024-11-19 09:49:14.746206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.202 [2024-11-19 09:49:14.746213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.202 [2024-11-19 09:49:14.746219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.202 [2024-11-19 09:49:14.746224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.202 [2024-11-19 09:49:14.757986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.202 [2024-11-19 09:49:14.758577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.202 [2024-11-19 09:49:14.758607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.202 [2024-11-19 09:49:14.758616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.202 [2024-11-19 09:49:14.758782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.202 [2024-11-19 09:49:14.758936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.202 [2024-11-19 09:49:14.758942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.202 [2024-11-19 09:49:14.758948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.202 [2024-11-19 09:49:14.758953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.202 [2024-11-19 09:49:14.770716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.202 [2024-11-19 09:49:14.771153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.202 [2024-11-19 09:49:14.771173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.202 [2024-11-19 09:49:14.771179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.202 [2024-11-19 09:49:14.771330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.202 [2024-11-19 09:49:14.771481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.202 [2024-11-19 09:49:14.771487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.202 [2024-11-19 09:49:14.771492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.202 [2024-11-19 09:49:14.771497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.202 [2024-11-19 09:49:14.783389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.203 [2024-11-19 09:49:14.783868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.203 [2024-11-19 09:49:14.783881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.203 [2024-11-19 09:49:14.783886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.203 [2024-11-19 09:49:14.784037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.203 [2024-11-19 09:49:14.784196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.203 [2024-11-19 09:49:14.784202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.203 [2024-11-19 09:49:14.784207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.203 [2024-11-19 09:49:14.784212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.203 [2024-11-19 09:49:14.796107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.203 [2024-11-19 09:49:14.796490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.203 [2024-11-19 09:49:14.796502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.203 [2024-11-19 09:49:14.796508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.203 [2024-11-19 09:49:14.796658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.203 [2024-11-19 09:49:14.796808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.203 [2024-11-19 09:49:14.796814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.203 [2024-11-19 09:49:14.796819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.203 [2024-11-19 09:49:14.796824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.203 [2024-11-19 09:49:14.808718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.203 [2024-11-19 09:49:14.809184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.203 [2024-11-19 09:49:14.809214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.203 [2024-11-19 09:49:14.809223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.203 [2024-11-19 09:49:14.809389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.203 [2024-11-19 09:49:14.809544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.203 [2024-11-19 09:49:14.809551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.203 [2024-11-19 09:49:14.809557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.203 [2024-11-19 09:49:14.809563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.203 [2024-11-19 09:49:14.821459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.203 [2024-11-19 09:49:14.821952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.203 [2024-11-19 09:49:14.821966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.203 [2024-11-19 09:49:14.821971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.203 [2024-11-19 09:49:14.822122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.203 [2024-11-19 09:49:14.822279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.203 [2024-11-19 09:49:14.822285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.203 [2024-11-19 09:49:14.822294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.203 [2024-11-19 09:49:14.822298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.203 [2024-11-19 09:49:14.834202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.203 [2024-11-19 09:49:14.834762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.203 [2024-11-19 09:49:14.834792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.203 [2024-11-19 09:49:14.834801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.203 [2024-11-19 09:49:14.834967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.203 [2024-11-19 09:49:14.835121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.203 [2024-11-19 09:49:14.835128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.203 [2024-11-19 09:49:14.835133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.203 [2024-11-19 09:49:14.835139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.203 [2024-11-19 09:49:14.846888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.203 [2024-11-19 09:49:14.847445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.203 [2024-11-19 09:49:14.847475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.203 [2024-11-19 09:49:14.847484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.203 [2024-11-19 09:49:14.847650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.203 [2024-11-19 09:49:14.847804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.203 [2024-11-19 09:49:14.847810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.203 [2024-11-19 09:49:14.847816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.203 [2024-11-19 09:49:14.847821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.203 [2024-11-19 09:49:14.859566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.203 [2024-11-19 09:49:14.860171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.203 [2024-11-19 09:49:14.860201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.203 [2024-11-19 09:49:14.860210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.203 [2024-11-19 09:49:14.860379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.203 [2024-11-19 09:49:14.860532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.203 [2024-11-19 09:49:14.860538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.203 [2024-11-19 09:49:14.860544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.203 [2024-11-19 09:49:14.860550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.203 [2024-11-19 09:49:14.872295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.203 [2024-11-19 09:49:14.872795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.203 [2024-11-19 09:49:14.872825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.203 [2024-11-19 09:49:14.872833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.203 [2024-11-19 09:49:14.873002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.203 [2024-11-19 09:49:14.873156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.203 [2024-11-19 09:49:14.873171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.203 [2024-11-19 09:49:14.873176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.204 [2024-11-19 09:49:14.873182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.204 [2024-11-19 09:49:14.884926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.204 [2024-11-19 09:49:14.885338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.204 [2024-11-19 09:49:14.885368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.204 [2024-11-19 09:49:14.885377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.204 [2024-11-19 09:49:14.885543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.204 [2024-11-19 09:49:14.885697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.204 [2024-11-19 09:49:14.885703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.204 [2024-11-19 09:49:14.885709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.204 [2024-11-19 09:49:14.885715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.204 [2024-11-19 09:49:14.897606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.204 [2024-11-19 09:49:14.898205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.204 [2024-11-19 09:49:14.898235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.204 [2024-11-19 09:49:14.898244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.204 [2024-11-19 09:49:14.898413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.204 [2024-11-19 09:49:14.898567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.204 [2024-11-19 09:49:14.898573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.204 [2024-11-19 09:49:14.898579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.204 [2024-11-19 09:49:14.898585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.204 [2024-11-19 09:49:14.910330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.204 [2024-11-19 09:49:14.910906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.204 [2024-11-19 09:49:14.910936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.204 [2024-11-19 09:49:14.910947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.204 [2024-11-19 09:49:14.911114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.204 [2024-11-19 09:49:14.911273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.204 [2024-11-19 09:49:14.911280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.204 [2024-11-19 09:49:14.911286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.204 [2024-11-19 09:49:14.911291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.204 [2024-11-19 09:49:14.923029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.204 [2024-11-19 09:49:14.923565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.204 [2024-11-19 09:49:14.923595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.204 [2024-11-19 09:49:14.923605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.204 [2024-11-19 09:49:14.923771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.204 [2024-11-19 09:49:14.923924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.204 [2024-11-19 09:49:14.923931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.204 [2024-11-19 09:49:14.923936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.204 [2024-11-19 09:49:14.923942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.204 [2024-11-19 09:49:14.935703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.204 [2024-11-19 09:49:14.936198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.204 [2024-11-19 09:49:14.936220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.204 [2024-11-19 09:49:14.936226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.204 [2024-11-19 09:49:14.936383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.204 [2024-11-19 09:49:14.936535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.204 [2024-11-19 09:49:14.936541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.204 [2024-11-19 09:49:14.936546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.204 [2024-11-19 09:49:14.936551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.467 [2024-11-19 09:49:14.948440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.467 [2024-11-19 09:49:14.948883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.467 [2024-11-19 09:49:14.948913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.467 [2024-11-19 09:49:14.948922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.467 [2024-11-19 09:49:14.949088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.467 [2024-11-19 09:49:14.949251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.467 [2024-11-19 09:49:14.949259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.467 [2024-11-19 09:49:14.949264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.467 [2024-11-19 09:49:14.949270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.467 [2024-11-19 09:49:14.961150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.467 [2024-11-19 09:49:14.961732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.467 [2024-11-19 09:49:14.961761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.467 [2024-11-19 09:49:14.961770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.467 [2024-11-19 09:49:14.961936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.467 [2024-11-19 09:49:14.962090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.467 [2024-11-19 09:49:14.962096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.467 [2024-11-19 09:49:14.962102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.467 [2024-11-19 09:49:14.962108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.467 [2024-11-19 09:49:14.973903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.467 [2024-11-19 09:49:14.974367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.467 [2024-11-19 09:49:14.974397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.467 [2024-11-19 09:49:14.974406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.467 [2024-11-19 09:49:14.974575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.467 [2024-11-19 09:49:14.974729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.467 [2024-11-19 09:49:14.974737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.467 [2024-11-19 09:49:14.974744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.467 [2024-11-19 09:49:14.974750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 520150 Killed "${NVMF_APP[@]}" "$@"
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:28.467 [2024-11-19 09:49:14.986651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.467 [2024-11-19 09:49:14.987265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.467 [2024-11-19 09:49:14.987296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.467 [2024-11-19 09:49:14.987309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.467 [2024-11-19 09:49:14.987478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.467 [2024-11-19 09:49:14.987632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.467 [2024-11-19 09:49:14.987639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.467 [2024-11-19 09:49:14.987645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.467 [2024-11-19 09:49:14.987651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=521770
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 521770
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 521770 ']'
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:28.467 09:49:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:28.467 [2024-11-19 09:49:14.999269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.467 [2024-11-19 09:49:14.999853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.467 [2024-11-19 09:49:14.999884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.467 [2024-11-19 09:49:14.999893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.467 [2024-11-19 09:49:15.000059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.467 [2024-11-19 09:49:15.000219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.467 [2024-11-19 09:49:15.000227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.468 [2024-11-19 09:49:15.000233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.468 [2024-11-19 09:49:15.000239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.468 [2024-11-19 09:49:15.011994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.468 [2024-11-19 09:49:15.012485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.468 [2024-11-19 09:49:15.012501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.468 [2024-11-19 09:49:15.012506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.468 [2024-11-19 09:49:15.012657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.468 [2024-11-19 09:49:15.012808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.468 [2024-11-19 09:49:15.012817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.468 [2024-11-19 09:49:15.012823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.468 [2024-11-19 09:49:15.012828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.468 [2024-11-19 09:49:15.024725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.468 [2024-11-19 09:49:15.025169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.468 [2024-11-19 09:49:15.025184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.468 [2024-11-19 09:49:15.025189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.468 [2024-11-19 09:49:15.025340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.468 [2024-11-19 09:49:15.025491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.468 [2024-11-19 09:49:15.025497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.468 [2024-11-19 09:49:15.025502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.468 [2024-11-19 09:49:15.025507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.468 [2024-11-19 09:49:15.037422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.468 [2024-11-19 09:49:15.037985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.468 [2024-11-19 09:49:15.038015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.468 [2024-11-19 09:49:15.038024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.468 [2024-11-19 09:49:15.038196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.468 [2024-11-19 09:49:15.038351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.468 [2024-11-19 09:49:15.038357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.468 [2024-11-19 09:49:15.038363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.468 [2024-11-19 09:49:15.038369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.468 [2024-11-19 09:49:15.041456] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:31:28.468 [2024-11-19 09:49:15.041502] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:28.468 [2024-11-19 09:49:15.050145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.468 [2024-11-19 09:49:15.050727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.468 [2024-11-19 09:49:15.050757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.468 [2024-11-19 09:49:15.050766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.468 [2024-11-19 09:49:15.050933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.468 [2024-11-19 09:49:15.051087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.468 [2024-11-19 09:49:15.051096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.468 [2024-11-19 09:49:15.051102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.468 [2024-11-19 09:49:15.051108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.468 [2024-11-19 09:49:15.062869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.468 [2024-11-19 09:49:15.063448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.468 [2024-11-19 09:49:15.063478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.468 [2024-11-19 09:49:15.063487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.468 [2024-11-19 09:49:15.063653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.468 [2024-11-19 09:49:15.063807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.468 [2024-11-19 09:49:15.063813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.468 [2024-11-19 09:49:15.063819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.468 [2024-11-19 09:49:15.063825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.468 [2024-11-19 09:49:15.075577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.468 [2024-11-19 09:49:15.076070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.468 [2024-11-19 09:49:15.076085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.468 [2024-11-19 09:49:15.076091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.468 [2024-11-19 09:49:15.076246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.468 [2024-11-19 09:49:15.076398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.468 [2024-11-19 09:49:15.076404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.468 [2024-11-19 09:49:15.076409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.468 [2024-11-19 09:49:15.076414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.468 [2024-11-19 09:49:15.088235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.468 [2024-11-19 09:49:15.088695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.468 [2024-11-19 09:49:15.088710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.468 [2024-11-19 09:49:15.088715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.468 [2024-11-19 09:49:15.088865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.468 [2024-11-19 09:49:15.089016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.468 [2024-11-19 09:49:15.089022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.468 [2024-11-19 09:49:15.089027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.468 [2024-11-19 09:49:15.089035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.468 [2024-11-19 09:49:15.100927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.468 [2024-11-19 09:49:15.101558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.468 [2024-11-19 09:49:15.101589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.468 [2024-11-19 09:49:15.101598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.468 [2024-11-19 09:49:15.101764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.468 [2024-11-19 09:49:15.101918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.468 [2024-11-19 09:49:15.101924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.468 [2024-11-19 09:49:15.101930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.468 [2024-11-19 09:49:15.101936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.468 [2024-11-19 09:49:15.113547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.468 [2024-11-19 09:49:15.114117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.468 [2024-11-19 09:49:15.114147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.468 [2024-11-19 09:49:15.114156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.468 [2024-11-19 09:49:15.114331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.468 [2024-11-19 09:49:15.114486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.468 [2024-11-19 09:49:15.114492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.468 [2024-11-19 09:49:15.114499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.468 [2024-11-19 09:49:15.114504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.468 [2024-11-19 09:49:15.126264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.468 [2024-11-19 09:49:15.126834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.468 [2024-11-19 09:49:15.126865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.469 [2024-11-19 09:49:15.126874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.469 [2024-11-19 09:49:15.127040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.469 [2024-11-19 09:49:15.127199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.469 [2024-11-19 09:49:15.127208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.469 [2024-11-19 09:49:15.127214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.469 [2024-11-19 09:49:15.127220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.469 [2024-11-19 09:49:15.132653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:28.469 [2024-11-19 09:49:15.138995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.469 [2024-11-19 09:49:15.139615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.469 [2024-11-19 09:49:15.139646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.469 [2024-11-19 09:49:15.139656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.469 [2024-11-19 09:49:15.139823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.469 [2024-11-19 09:49:15.139977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.469 [2024-11-19 09:49:15.139984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.469 [2024-11-19 09:49:15.139990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.469 [2024-11-19 09:49:15.139996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.469 [2024-11-19 09:49:15.151617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.469 [2024-11-19 09:49:15.152212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.469 [2024-11-19 09:49:15.152242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.469 [2024-11-19 09:49:15.152252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.469 [2024-11-19 09:49:15.152420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.469 [2024-11-19 09:49:15.152576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.469 [2024-11-19 09:49:15.152582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.469 [2024-11-19 09:49:15.152588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.469 [2024-11-19 09:49:15.152594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:28.469 [2024-11-19 09:49:15.161678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.469 [2024-11-19 09:49:15.161699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.469 [2024-11-19 09:49:15.161706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.469 [2024-11-19 09:49:15.161712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:28.469 [2024-11-19 09:49:15.161716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.469 [2024-11-19 09:49:15.162948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:28.469 [2024-11-19 09:49:15.163096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.469 [2024-11-19 09:49:15.163098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:28.469 [2024-11-19 09:49:15.164360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.469 [2024-11-19 09:49:15.164904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.469 [2024-11-19 09:49:15.164934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.469 [2024-11-19 09:49:15.164943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.469 [2024-11-19 09:49:15.165110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.469 [2024-11-19 09:49:15.165270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.469 [2024-11-19 09:49:15.165281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.469 [2024-11-19 09:49:15.165287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.469 [2024-11-19 09:49:15.165293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.469 [2024-11-19 09:49:15.177052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.469 [2024-11-19 09:49:15.177676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.469 [2024-11-19 09:49:15.177707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.469 [2024-11-19 09:49:15.177716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.469 [2024-11-19 09:49:15.177884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.469 [2024-11-19 09:49:15.178038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.469 [2024-11-19 09:49:15.178044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.469 [2024-11-19 09:49:15.178050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.469 [2024-11-19 09:49:15.178056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.469 [2024-11-19 09:49:15.189679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.469 [2024-11-19 09:49:15.190174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.469 [2024-11-19 09:49:15.190205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.469 [2024-11-19 09:49:15.190214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.469 [2024-11-19 09:49:15.190383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.469 [2024-11-19 09:49:15.190538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.469 [2024-11-19 09:49:15.190544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.469 [2024-11-19 09:49:15.190550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.469 [2024-11-19 09:49:15.190556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.469 [2024-11-19 09:49:15.202319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.469 [2024-11-19 09:49:15.202856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.469 [2024-11-19 09:49:15.202871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.469 [2024-11-19 09:49:15.202877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.469 [2024-11-19 09:49:15.203029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.469 [2024-11-19 09:49:15.203185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.469 [2024-11-19 09:49:15.203192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.469 [2024-11-19 09:49:15.203198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.469 [2024-11-19 09:49:15.203209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.732 [2024-11-19 09:49:15.214958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.732 [2024-11-19 09:49:15.215431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.732 [2024-11-19 09:49:15.215445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.732 [2024-11-19 09:49:15.215450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.732 [2024-11-19 09:49:15.215601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.732 [2024-11-19 09:49:15.215752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.732 [2024-11-19 09:49:15.215758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.732 [2024-11-19 09:49:15.215763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.732 [2024-11-19 09:49:15.215768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.732 [2024-11-19 09:49:15.227714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.732 [2024-11-19 09:49:15.228135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.732 [2024-11-19 09:49:15.228171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.732 [2024-11-19 09:49:15.228180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.732 [2024-11-19 09:49:15.228347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.732 [2024-11-19 09:49:15.228501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.732 [2024-11-19 09:49:15.228508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.732 [2024-11-19 09:49:15.228513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.732 [2024-11-19 09:49:15.228520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.732 [2024-11-19 09:49:15.240439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.732 [2024-11-19 09:49:15.240955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.732 [2024-11-19 09:49:15.240970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.732 [2024-11-19 09:49:15.240976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.732 [2024-11-19 09:49:15.241127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.732 [2024-11-19 09:49:15.241284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.732 [2024-11-19 09:49:15.241291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.732 [2024-11-19 09:49:15.241296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.732 [2024-11-19 09:49:15.241301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.732 [2024-11-19 09:49:15.253191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.732 [2024-11-19 09:49:15.253802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.732 [2024-11-19 09:49:15.253832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.732 [2024-11-19 09:49:15.253841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.732 [2024-11-19 09:49:15.254008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.732 [2024-11-19 09:49:15.254169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.732 [2024-11-19 09:49:15.254175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.732 [2024-11-19 09:49:15.254181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.732 [2024-11-19 09:49:15.254187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.732 [2024-11-19 09:49:15.265939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.732 [2024-11-19 09:49:15.266559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.732 [2024-11-19 09:49:15.266589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.732 [2024-11-19 09:49:15.266598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.732 [2024-11-19 09:49:15.266765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.732 [2024-11-19 09:49:15.266919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.732 [2024-11-19 09:49:15.266925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.732 [2024-11-19 09:49:15.266931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.732 [2024-11-19 09:49:15.266937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.732 [2024-11-19 09:49:15.278689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.732 [2024-11-19 09:49:15.279262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.732 [2024-11-19 09:49:15.279293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.732 [2024-11-19 09:49:15.279301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.732 [2024-11-19 09:49:15.279470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.732 [2024-11-19 09:49:15.279624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.732 [2024-11-19 09:49:15.279630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.732 [2024-11-19 09:49:15.279635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.732 [2024-11-19 09:49:15.279641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.732 [2024-11-19 09:49:15.291404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.732 [2024-11-19 09:49:15.291917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.732 [2024-11-19 09:49:15.291932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.732 [2024-11-19 09:49:15.291938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.733 [2024-11-19 09:49:15.292093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.733 [2024-11-19 09:49:15.292250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.733 [2024-11-19 09:49:15.292257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.733 [2024-11-19 09:49:15.292262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.733 [2024-11-19 09:49:15.292267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.733 [2024-11-19 09:49:15.304149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.733 [2024-11-19 09:49:15.304661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.733 [2024-11-19 09:49:15.304674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.733 [2024-11-19 09:49:15.304680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.733 [2024-11-19 09:49:15.304830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.733 [2024-11-19 09:49:15.304980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.733 [2024-11-19 09:49:15.304986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.733 [2024-11-19 09:49:15.304991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.733 [2024-11-19 09:49:15.304996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.733 [2024-11-19 09:49:15.316887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.733 [2024-11-19 09:49:15.317461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.733 [2024-11-19 09:49:15.317492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.733 [2024-11-19 09:49:15.317500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.733 [2024-11-19 09:49:15.317667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.733 [2024-11-19 09:49:15.317821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.733 [2024-11-19 09:49:15.317828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.733 [2024-11-19 09:49:15.317834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.733 [2024-11-19 09:49:15.317839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.733 [2024-11-19 09:49:15.329597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.733 [2024-11-19 09:49:15.330074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.733 [2024-11-19 09:49:15.330104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.733 [2024-11-19 09:49:15.330113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.733 [2024-11-19 09:49:15.330285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.733 [2024-11-19 09:49:15.330440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.733 [2024-11-19 09:49:15.330450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.733 [2024-11-19 09:49:15.330456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.733 [2024-11-19 09:49:15.330461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.733 [2024-11-19 09:49:15.342233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.733 [2024-11-19 09:49:15.342752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.733 [2024-11-19 09:49:15.342766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.733 [2024-11-19 09:49:15.342772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.733 [2024-11-19 09:49:15.342922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.733 [2024-11-19 09:49:15.343073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.733 [2024-11-19 09:49:15.343079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.733 [2024-11-19 09:49:15.343085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.733 [2024-11-19 09:49:15.343089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.733 [2024-11-19 09:49:15.354839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.733 [2024-11-19 09:49:15.355281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.733 [2024-11-19 09:49:15.355312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.733 [2024-11-19 09:49:15.355320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.733 [2024-11-19 09:49:15.355489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.733 [2024-11-19 09:49:15.355644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.733 [2024-11-19 09:49:15.355650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.733 [2024-11-19 09:49:15.355656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.733 [2024-11-19 09:49:15.355661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.733 [2024-11-19 09:49:15.367555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.733 [2024-11-19 09:49:15.368071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.733 [2024-11-19 09:49:15.368086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.733 [2024-11-19 09:49:15.368092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.733 [2024-11-19 09:49:15.368248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.733 [2024-11-19 09:49:15.368399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.733 [2024-11-19 09:49:15.368405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.733 [2024-11-19 09:49:15.368410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.733 [2024-11-19 09:49:15.368418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.733 [2024-11-19 09:49:15.380165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.733 [2024-11-19 09:49:15.380750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.733 [2024-11-19 09:49:15.380781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.733 [2024-11-19 09:49:15.380790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.733 [2024-11-19 09:49:15.380956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.733 [2024-11-19 09:49:15.381110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.733 [2024-11-19 09:49:15.381117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.733 [2024-11-19 09:49:15.381123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.733 [2024-11-19 09:49:15.381129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.733 [2024-11-19 09:49:15.392881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.733 [2024-11-19 09:49:15.393472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.733 [2024-11-19 09:49:15.393502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.733 [2024-11-19 09:49:15.393514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.733 [2024-11-19 09:49:15.393680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.733 [2024-11-19 09:49:15.393834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.733 [2024-11-19 09:49:15.393841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.733 [2024-11-19 09:49:15.393847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.733 [2024-11-19 09:49:15.393853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.733 [2024-11-19 09:49:15.405608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.733 [2024-11-19 09:49:15.405949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.733 [2024-11-19 09:49:15.405963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.733 [2024-11-19 09:49:15.405969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.733 [2024-11-19 09:49:15.406120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.733 [2024-11-19 09:49:15.406276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.734 [2024-11-19 09:49:15.406283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.734 [2024-11-19 09:49:15.406288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.734 [2024-11-19 09:49:15.406293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.734 [2024-11-19 09:49:15.418326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.734 [2024-11-19 09:49:15.418909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.734 [2024-11-19 09:49:15.418938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.734 [2024-11-19 09:49:15.418947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.734 [2024-11-19 09:49:15.419114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.734 [2024-11-19 09:49:15.419274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.734 [2024-11-19 09:49:15.419281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.734 [2024-11-19 09:49:15.419287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.734 [2024-11-19 09:49:15.419293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.734 4733.67 IOPS, 18.49 MiB/s [2024-11-19T08:49:15.482Z]
00:31:28.734 [2024-11-19 09:49:15.431040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.734 [2024-11-19 09:49:15.431680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.734 [2024-11-19 09:49:15.431711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.734 [2024-11-19 09:49:15.431720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.734 [2024-11-19 09:49:15.431886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.734 [2024-11-19 09:49:15.432041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.734 [2024-11-19 09:49:15.432047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.734 [2024-11-19 09:49:15.432053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.734 [2024-11-19 09:49:15.432059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.734 [2024-11-19 09:49:15.443690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.734 [2024-11-19 09:49:15.444264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.734 [2024-11-19 09:49:15.444293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.734 [2024-11-19 09:49:15.444302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.734 [2024-11-19 09:49:15.444471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.734 [2024-11-19 09:49:15.444625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.734 [2024-11-19 09:49:15.444631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.734 [2024-11-19 09:49:15.444637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.734 [2024-11-19 09:49:15.444643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.734 [2024-11-19 09:49:15.456402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.734 [2024-11-19 09:49:15.456898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.734 [2024-11-19 09:49:15.456913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.734 [2024-11-19 09:49:15.456918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.734 [2024-11-19 09:49:15.457076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.734 [2024-11-19 09:49:15.457231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.734 [2024-11-19 09:49:15.457237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.734 [2024-11-19 09:49:15.457243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.734 [2024-11-19 09:49:15.457247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.734 [2024-11-19 09:49:15.469133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.734 [2024-11-19 09:49:15.469607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.734 [2024-11-19 09:49:15.469620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.734 [2024-11-19 09:49:15.469625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.734 [2024-11-19 09:49:15.469776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.734 [2024-11-19 09:49:15.469926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.734 [2024-11-19 09:49:15.469932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.734 [2024-11-19 09:49:15.469937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.734 [2024-11-19 09:49:15.469942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.996 [2024-11-19 09:49:15.481832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.996 [2024-11-19 09:49:15.482420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.996 [2024-11-19 09:49:15.482451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.996 [2024-11-19 09:49:15.482460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.996 [2024-11-19 09:49:15.482626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.996 [2024-11-19 09:49:15.482780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.996 [2024-11-19 09:49:15.482787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.996 [2024-11-19 09:49:15.482792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.996 [2024-11-19 09:49:15.482798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.996 [2024-11-19 09:49:15.494553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.996 [2024-11-19 09:49:15.495147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.996 [2024-11-19 09:49:15.495183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.996 [2024-11-19 09:49:15.495192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.996 [2024-11-19 09:49:15.495359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.996 [2024-11-19 09:49:15.495513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.996 [2024-11-19 09:49:15.495523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.996 [2024-11-19 09:49:15.495529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.997 [2024-11-19 09:49:15.495534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.997 [2024-11-19 09:49:15.507293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.997 [2024-11-19 09:49:15.507860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.997 [2024-11-19 09:49:15.507891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.997 [2024-11-19 09:49:15.507899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.997 [2024-11-19 09:49:15.508066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.997 [2024-11-19 09:49:15.508226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.997 [2024-11-19 09:49:15.508234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.997 [2024-11-19 09:49:15.508240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.997 [2024-11-19 09:49:15.508246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.997 [2024-11-19 09:49:15.519999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.997 [2024-11-19 09:49:15.520535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.997 [2024-11-19 09:49:15.520566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.997 [2024-11-19 09:49:15.520575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.997 [2024-11-19 09:49:15.520741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.997 [2024-11-19 09:49:15.520895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.997 [2024-11-19 09:49:15.520901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.997 [2024-11-19 09:49:15.520906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.997 [2024-11-19 09:49:15.520912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.997 [2024-11-19 09:49:15.532667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.997 [2024-11-19 09:49:15.533061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.997 [2024-11-19 09:49:15.533077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.997 [2024-11-19 09:49:15.533082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.997 [2024-11-19 09:49:15.533246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.997 [2024-11-19 09:49:15.533398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.997 [2024-11-19 09:49:15.533404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.997 [2024-11-19 09:49:15.533410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.997 [2024-11-19 09:49:15.533418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.997 [2024-11-19 09:49:15.545320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.997 [2024-11-19 09:49:15.545636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.997 [2024-11-19 09:49:15.545649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.997 [2024-11-19 09:49:15.545654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.997 [2024-11-19 09:49:15.545805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.997 [2024-11-19 09:49:15.545955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.997 [2024-11-19 09:49:15.545960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.997 [2024-11-19 09:49:15.545966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.997 [2024-11-19 09:49:15.545970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.997 [2024-11-19 09:49:15.558005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.997 [2024-11-19 09:49:15.558482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.997 [2024-11-19 09:49:15.558495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.997 [2024-11-19 09:49:15.558501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.997 [2024-11-19 09:49:15.558651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.997 [2024-11-19 09:49:15.558802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.997 [2024-11-19 09:49:15.558808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.997 [2024-11-19 09:49:15.558813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.997 [2024-11-19 09:49:15.558818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.997 [2024-11-19 09:49:15.570698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.997 [2024-11-19 09:49:15.571155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.997 [2024-11-19 09:49:15.571171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.997 [2024-11-19 09:49:15.571176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.997 [2024-11-19 09:49:15.571327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.997 [2024-11-19 09:49:15.571477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.997 [2024-11-19 09:49:15.571483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.997 [2024-11-19 09:49:15.571488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.997 [2024-11-19 09:49:15.571493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.997 [2024-11-19 09:49:15.583381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.997 [2024-11-19 09:49:15.583842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.997 [2024-11-19 09:49:15.583854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.997 [2024-11-19 09:49:15.583859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.997 [2024-11-19 09:49:15.584009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.997 [2024-11-19 09:49:15.584165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.997 [2024-11-19 09:49:15.584172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.997 [2024-11-19 09:49:15.584177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.997 [2024-11-19 09:49:15.584181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.997 [2024-11-19 09:49:15.596065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.997 [2024-11-19 09:49:15.596651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.997 [2024-11-19 09:49:15.596681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.997 [2024-11-19 09:49:15.596690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.997 [2024-11-19 09:49:15.596857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.997 [2024-11-19 09:49:15.597011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.997 [2024-11-19 09:49:15.597018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.997 [2024-11-19 09:49:15.597023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.997 [2024-11-19 09:49:15.597029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.997 [2024-11-19 09:49:15.608791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.997 [2024-11-19 09:49:15.609466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.998 [2024-11-19 09:49:15.609496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.998 [2024-11-19 09:49:15.609505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.998 [2024-11-19 09:49:15.609672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.998 [2024-11-19 09:49:15.609826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.998 [2024-11-19 09:49:15.609832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.998 [2024-11-19 09:49:15.609838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.998 [2024-11-19 09:49:15.609843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.998 [2024-11-19 09:49:15.621460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.998 [2024-11-19 09:49:15.622054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.998 [2024-11-19 09:49:15.622083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.998 [2024-11-19 09:49:15.622092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.998 [2024-11-19 09:49:15.622268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.998 [2024-11-19 09:49:15.622423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.998 [2024-11-19 09:49:15.622430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.998 [2024-11-19 09:49:15.622436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.998 [2024-11-19 09:49:15.622441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.998 [2024-11-19 09:49:15.634200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.998 [2024-11-19 09:49:15.634781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.998 [2024-11-19 09:49:15.634811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.998 [2024-11-19 09:49:15.634820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.998 [2024-11-19 09:49:15.634986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.998 [2024-11-19 09:49:15.635148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.998 [2024-11-19 09:49:15.635155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.998 [2024-11-19 09:49:15.635168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.998 [2024-11-19 09:49:15.635174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.998 [2024-11-19 09:49:15.646921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.998 [2024-11-19 09:49:15.647382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.998 [2024-11-19 09:49:15.647412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.998 [2024-11-19 09:49:15.647421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.998 [2024-11-19 09:49:15.647587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.998 [2024-11-19 09:49:15.647741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.998 [2024-11-19 09:49:15.647747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.998 [2024-11-19 09:49:15.647753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.998 [2024-11-19 09:49:15.647759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.998 [2024-11-19 09:49:15.659660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.998 [2024-11-19 09:49:15.660253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.998 [2024-11-19 09:49:15.660284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.998 [2024-11-19 09:49:15.660293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.998 [2024-11-19 09:49:15.660462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.998 [2024-11-19 09:49:15.660616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.998 [2024-11-19 09:49:15.660626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.998 [2024-11-19 09:49:15.660631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.998 [2024-11-19 09:49:15.660637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.998 [2024-11-19 09:49:15.672400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.998 [2024-11-19 09:49:15.672861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.998 [2024-11-19 09:49:15.672876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.998 [2024-11-19 09:49:15.672881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.998 [2024-11-19 09:49:15.673032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.998 [2024-11-19 09:49:15.673188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.998 [2024-11-19 09:49:15.673194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.998 [2024-11-19 09:49:15.673200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.998 [2024-11-19 09:49:15.673213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.998 [2024-11-19 09:49:15.685039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:28.998 [2024-11-19 09:49:15.685515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:28.998 [2024-11-19 09:49:15.685545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420
00:31:28.998 [2024-11-19 09:49:15.685554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set
00:31:28.998 [2024-11-19 09:49:15.685723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor
00:31:28.998 [2024-11-19 09:49:15.685877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:28.998 [2024-11-19 09:49:15.685884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:28.998 [2024-11-19 09:49:15.685890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:28.998 [2024-11-19 09:49:15.685896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:28.998 [2024-11-19 09:49:15.697649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.998 [2024-11-19 09:49:15.698163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.998 [2024-11-19 09:49:15.698179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.998 [2024-11-19 09:49:15.698184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.998 [2024-11-19 09:49:15.698335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.998 [2024-11-19 09:49:15.698486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.998 [2024-11-19 09:49:15.698492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.998 [2024-11-19 09:49:15.698497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.998 [2024-11-19 09:49:15.698506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.998 [2024-11-19 09:49:15.710393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.998 [2024-11-19 09:49:15.710901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.998 [2024-11-19 09:49:15.710914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.998 [2024-11-19 09:49:15.710920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.998 [2024-11-19 09:49:15.711070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.998 [2024-11-19 09:49:15.711226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.999 [2024-11-19 09:49:15.711233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.999 [2024-11-19 09:49:15.711238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.999 [2024-11-19 09:49:15.711242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.999 [2024-11-19 09:49:15.723118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.999 [2024-11-19 09:49:15.723566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.999 [2024-11-19 09:49:15.723578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.999 [2024-11-19 09:49:15.723583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.999 [2024-11-19 09:49:15.723733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.999 [2024-11-19 09:49:15.723884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.999 [2024-11-19 09:49:15.723890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.999 [2024-11-19 09:49:15.723895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.999 [2024-11-19 09:49:15.723900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:28.999 [2024-11-19 09:49:15.735791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:28.999 [2024-11-19 09:49:15.736130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.999 [2024-11-19 09:49:15.736142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:28.999 [2024-11-19 09:49:15.736148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:28.999 [2024-11-19 09:49:15.736302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:28.999 [2024-11-19 09:49:15.736453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:28.999 [2024-11-19 09:49:15.736459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:28.999 [2024-11-19 09:49:15.736464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:28.999 [2024-11-19 09:49:15.736469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.260 [2024-11-19 09:49:15.748490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.260 [2024-11-19 09:49:15.748949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.260 [2024-11-19 09:49:15.748961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.260 [2024-11-19 09:49:15.748967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.260 [2024-11-19 09:49:15.749117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.260 [2024-11-19 09:49:15.749271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.260 [2024-11-19 09:49:15.749277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.260 [2024-11-19 09:49:15.749282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.261 [2024-11-19 09:49:15.749287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.261 [2024-11-19 09:49:15.761161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.261 [2024-11-19 09:49:15.761617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.261 [2024-11-19 09:49:15.761629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.261 [2024-11-19 09:49:15.761634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.261 [2024-11-19 09:49:15.761784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.261 [2024-11-19 09:49:15.761935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.261 [2024-11-19 09:49:15.761941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.261 [2024-11-19 09:49:15.761946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.261 [2024-11-19 09:49:15.761950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.261 [2024-11-19 09:49:15.773829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.261 [2024-11-19 09:49:15.774417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.261 [2024-11-19 09:49:15.774448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.261 [2024-11-19 09:49:15.774456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.261 [2024-11-19 09:49:15.774623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.261 [2024-11-19 09:49:15.774777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.261 [2024-11-19 09:49:15.774784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.261 [2024-11-19 09:49:15.774790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.261 [2024-11-19 09:49:15.774796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.261 [2024-11-19 09:49:15.786545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.261 [2024-11-19 09:49:15.787063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.261 [2024-11-19 09:49:15.787093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.261 [2024-11-19 09:49:15.787103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.261 [2024-11-19 09:49:15.787282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.261 [2024-11-19 09:49:15.787438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.261 [2024-11-19 09:49:15.787444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.261 [2024-11-19 09:49:15.787450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.261 [2024-11-19 09:49:15.787456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.261 [2024-11-19 09:49:15.799200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.261 [2024-11-19 09:49:15.799693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.261 [2024-11-19 09:49:15.799723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.261 [2024-11-19 09:49:15.799733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.261 [2024-11-19 09:49:15.799899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.261 [2024-11-19 09:49:15.800053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.261 [2024-11-19 09:49:15.800059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.261 [2024-11-19 09:49:15.800065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.261 [2024-11-19 09:49:15.800071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.261 [2024-11-19 09:49:15.811820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.261 [2024-11-19 09:49:15.812284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.261 [2024-11-19 09:49:15.812313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.261 [2024-11-19 09:49:15.812322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.261 [2024-11-19 09:49:15.812490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.261 [2024-11-19 09:49:15.812644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.261 [2024-11-19 09:49:15.812650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.261 [2024-11-19 09:49:15.812656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.261 [2024-11-19 09:49:15.812661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.261 [2024-11-19 09:49:15.824556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.261 [2024-11-19 09:49:15.825120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.261 [2024-11-19 09:49:15.825150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.261 [2024-11-19 09:49:15.825164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.261 [2024-11-19 09:49:15.825333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.261 [2024-11-19 09:49:15.825487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.261 [2024-11-19 09:49:15.825497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.261 [2024-11-19 09:49:15.825502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.261 [2024-11-19 09:49:15.825508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.261 [2024-11-19 09:49:15.837267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.261 [2024-11-19 09:49:15.837770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.261 [2024-11-19 09:49:15.837800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.261 [2024-11-19 09:49:15.837809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.261 [2024-11-19 09:49:15.837975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.261 [2024-11-19 09:49:15.838129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.261 [2024-11-19 09:49:15.838136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.261 [2024-11-19 09:49:15.838141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.261 [2024-11-19 09:49:15.838147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.261 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:29.261 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:29.261 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:29.261 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:29.261 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:29.261 [2024-11-19 09:49:15.849899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.261 [2024-11-19 09:49:15.850470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.262 [2024-11-19 09:49:15.850501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.262 [2024-11-19 09:49:15.850510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.262 [2024-11-19 09:49:15.850676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.262 [2024-11-19 09:49:15.850831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.262 [2024-11-19 09:49:15.850837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.262 [2024-11-19 09:49:15.850843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.262 [2024-11-19 09:49:15.850849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.262 [2024-11-19 09:49:15.862599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.262 [2024-11-19 09:49:15.863100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.262 [2024-11-19 09:49:15.863115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.262 [2024-11-19 09:49:15.863120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.262 [2024-11-19 09:49:15.863276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.262 [2024-11-19 09:49:15.863432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.262 [2024-11-19 09:49:15.863438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.262 [2024-11-19 09:49:15.863444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.262 [2024-11-19 09:49:15.863449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.262 [2024-11-19 09:49:15.875221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.262 [2024-11-19 09:49:15.875706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.262 [2024-11-19 09:49:15.875719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.262 [2024-11-19 09:49:15.875726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.262 [2024-11-19 09:49:15.875877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.262 [2024-11-19 09:49:15.876028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.262 [2024-11-19 09:49:15.876034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.262 [2024-11-19 09:49:15.876039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.262 [2024-11-19 09:49:15.876043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:29.262 [2024-11-19 09:49:15.886651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.262 [2024-11-19 09:49:15.887925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.262 [2024-11-19 09:49:15.888414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.262 [2024-11-19 09:49:15.888426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.262 [2024-11-19 09:49:15.888432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.262 [2024-11-19 09:49:15.888582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.262 [2024-11-19 09:49:15.888733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.262 [2024-11-19 09:49:15.888738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.262 [2024-11-19 09:49:15.888743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.262 [2024-11-19 09:49:15.888748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:29.262 [2024-11-19 09:49:15.900628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.262 [2024-11-19 09:49:15.901091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.262 [2024-11-19 09:49:15.901103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.262 [2024-11-19 09:49:15.901109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.262 [2024-11-19 09:49:15.901264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.262 [2024-11-19 09:49:15.901415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.262 [2024-11-19 09:49:15.901421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.262 [2024-11-19 09:49:15.901426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.262 [2024-11-19 09:49:15.901431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.262 [2024-11-19 09:49:15.913309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.262 [2024-11-19 09:49:15.913874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.262 [2024-11-19 09:49:15.913904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.262 [2024-11-19 09:49:15.913914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.262 [2024-11-19 09:49:15.914081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.262 [2024-11-19 09:49:15.914244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.262 [2024-11-19 09:49:15.914251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.262 [2024-11-19 09:49:15.914257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.262 [2024-11-19 09:49:15.914263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.262 Malloc0 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.262 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:29.262 [2024-11-19 09:49:15.926002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.262 [2024-11-19 09:49:15.926451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.262 [2024-11-19 09:49:15.926466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.262 [2024-11-19 09:49:15.926472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.262 [2024-11-19 09:49:15.926622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.263 [2024-11-19 09:49:15.926773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.263 [2024-11-19 09:49:15.926779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.263 [2024-11-19 09:49:15.926784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.263 [2024-11-19 09:49:15.926793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:29.263 [2024-11-19 09:49:15.938757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.263 [2024-11-19 09:49:15.939274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.263 [2024-11-19 09:49:15.939304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae000 with addr=10.0.0.2, port=4420 00:31:29.263 [2024-11-19 09:49:15.939313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae000 is same with the state(6) to be set 00:31:29.263 [2024-11-19 09:49:15.939482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae000 (9): Bad file descriptor 00:31:29.263 [2024-11-19 09:49:15.939636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:29.263 [2024-11-19 09:49:15.939642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:29.263 [2024-11-19 09:49:15.939648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:29.263 [2024-11-19 09:49:15.939654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:29.263 [2024-11-19 09:49:15.947573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.263 [2024-11-19 09:49:15.951401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.263 09:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 520740 00:31:29.263 [2024-11-19 09:49:15.986704] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:31:30.774 4854.71 IOPS, 18.96 MiB/s [2024-11-19T08:49:18.462Z] 5838.25 IOPS, 22.81 MiB/s [2024-11-19T08:49:19.845Z] 6618.11 IOPS, 25.85 MiB/s [2024-11-19T08:49:20.784Z] 7250.00 IOPS, 28.32 MiB/s [2024-11-19T08:49:21.724Z] 7754.36 IOPS, 30.29 MiB/s [2024-11-19T08:49:22.665Z] 8178.08 IOPS, 31.95 MiB/s [2024-11-19T08:49:23.603Z] 8540.00 IOPS, 33.36 MiB/s [2024-11-19T08:49:24.543Z] 8847.64 IOPS, 34.56 MiB/s 00:31:37.795 Latency(us) 00:31:37.795 [2024-11-19T08:49:24.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.795 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:37.795 Verification LBA range: start 0x0 length 0x4000 00:31:37.795 Nvme1n1 : 15.00 9106.68 35.57 13297.59 0.00 5694.65 556.37 14527.15 00:31:37.795 [2024-11-19T08:49:24.543Z] =================================================================================================================== 00:31:37.795 [2024-11-19T08:49:24.543Z] Total : 9106.68 35.57 13297.59 0.00 5694.65 556.37 14527.15 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:31:38.056 09:49:24 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.056 rmmod nvme_tcp 00:31:38.056 rmmod nvme_fabrics 00:31:38.056 rmmod nvme_keyring 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 521770 ']' 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 521770 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 521770 ']' 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 521770 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 521770 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 521770' 00:31:38.056 killing process with pid 521770 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 521770 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 521770 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:38.056 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:31:38.317 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.317 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.317 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.317 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.317 09:49:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.232 09:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.232 00:31:40.232 real 0m28.272s 00:31:40.232 user 1m3.620s 00:31:40.232 sys 0m7.650s 00:31:40.232 09:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.232 09:49:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:40.232 ************************************ 00:31:40.232 END TEST nvmf_bdevperf 00:31:40.232 ************************************ 00:31:40.232 09:49:26 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:40.232 09:49:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:40.232 09:49:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.232 09:49:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.232 ************************************ 00:31:40.232 START TEST nvmf_target_disconnect 00:31:40.232 ************************************ 00:31:40.232 09:49:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:40.494 * Looking for test storage... 00:31:40.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:40.494 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.494 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.494 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.494 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.494 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.494 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.495 09:49:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:40.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.495 --rc genhtml_branch_coverage=1 00:31:40.495 --rc genhtml_function_coverage=1 00:31:40.495 --rc genhtml_legend=1 00:31:40.495 --rc geninfo_all_blocks=1 00:31:40.495 --rc geninfo_unexecuted_blocks=1 
00:31:40.495 00:31:40.495 ' 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.495 --rc genhtml_branch_coverage=1 00:31:40.495 --rc genhtml_function_coverage=1 00:31:40.495 --rc genhtml_legend=1 00:31:40.495 --rc geninfo_all_blocks=1 00:31:40.495 --rc geninfo_unexecuted_blocks=1 00:31:40.495 00:31:40.495 ' 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.495 --rc genhtml_branch_coverage=1 00:31:40.495 --rc genhtml_function_coverage=1 00:31:40.495 --rc genhtml_legend=1 00:31:40.495 --rc geninfo_all_blocks=1 00:31:40.495 --rc geninfo_unexecuted_blocks=1 00:31:40.495 00:31:40.495 ' 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.495 --rc genhtml_branch_coverage=1 00:31:40.495 --rc genhtml_function_coverage=1 00:31:40.495 --rc genhtml_legend=1 00:31:40.495 --rc geninfo_all_blocks=1 00:31:40.495 --rc geninfo_unexecuted_blocks=1 00:31:40.495 00:31:40.495 ' 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.495 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.496 09:49:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:40.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.496 09:49:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.639 
09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:48.639 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:48.639 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.639 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:48.640 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:48.640 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.640 09:49:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:31:48.640 00:31:48.640 --- 10.0.0.2 ping statistics --- 00:31:48.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.640 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:48.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:31:48.640 00:31:48.640 --- 10.0.0.1 ping statistics --- 00:31:48.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.640 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:48.640 09:49:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:48.640 ************************************ 00:31:48.640 START TEST nvmf_target_disconnect_tc1 00:31:48.640 ************************************ 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:48.640 [2024-11-19 09:49:34.841716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.640 [2024-11-19 09:49:34.841785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c13ad0 with 
addr=10.0.0.2, port=4420 00:31:48.640 [2024-11-19 09:49:34.841809] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:48.640 [2024-11-19 09:49:34.841820] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:48.640 [2024-11-19 09:49:34.841828] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:48.640 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:48.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:48.640 Initializing NVMe Controllers 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:48.640 00:31:48.640 real 0m0.154s 00:31:48.640 user 0m0.071s 00:31:48.640 sys 0m0.082s 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:48.640 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:48.641 ************************************ 00:31:48.641 END TEST nvmf_target_disconnect_tc1 00:31:48.641 ************************************ 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:48.641 09:49:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:48.641 ************************************ 00:31:48.641 START TEST nvmf_target_disconnect_tc2 00:31:48.641 ************************************ 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=527918 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 527918 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 527918 ']' 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.641 09:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:48.641 [2024-11-19 09:49:35.004137] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:31:48.641 [2024-11-19 09:49:35.004202] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.641 [2024-11-19 09:49:35.103990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:48.641 [2024-11-19 09:49:35.156590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.641 [2024-11-19 09:49:35.156643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.641 [2024-11-19 09:49:35.156651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.641 [2024-11-19 09:49:35.156658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.641 [2024-11-19 09:49:35.156665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:48.641 [2024-11-19 09:49:35.158698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:48.641 [2024-11-19 09:49:35.158860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:48.641 [2024-11-19 09:49:35.159024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:48.641 [2024-11-19 09:49:35.159025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:49.212 Malloc0 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.212 09:49:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:49.212 [2024-11-19 09:49:35.917688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:49.212 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.213 09:49:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.213 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.213 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:49.474 [2024-11-19 09:49:35.958049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.474 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.474 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:49.474 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.474 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:49.474 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.474 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=528149 00:31:49.474 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:49.474 09:49:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:51.395 09:49:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 527918 00:31:51.395 09:49:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Write completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read 
completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Write completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Write completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Write completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Write completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 Read completed with error (sct=0, sc=8) 00:31:51.395 starting I/O failed 00:31:51.395 [2024-11-19 09:49:37.997320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:51.395 [2024-11-19 09:49:37.997665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:37.997704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:37.998068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:37.998080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 
00:31:51.395 [2024-11-19 09:49:37.998475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:37.998535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:37.998913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:37.998929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:37.999398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:37.999457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:37.999876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:37.999890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:38.000370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:38.000431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 
00:31:51.395 [2024-11-19 09:49:38.000769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:38.000783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:38.001139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:38.001151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:38.001515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:38.001528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:38.001884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:38.001896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:38.002398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:38.002460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 
00:31:51.395 [2024-11-19 09:49:38.002867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:38.002881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:38.003450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.395 [2024-11-19 09:49:38.003511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.395 qpair failed and we were unable to recover it. 00:31:51.395 [2024-11-19 09:49:38.003878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.003892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.004114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.004127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.004429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.004441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 
00:31:51.396 [2024-11-19 09:49:38.004742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.004754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.004982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.004994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.005234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.005247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.005571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.005583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.005881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.005893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 
00:31:51.396 [2024-11-19 09:49:38.006252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.006271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.006598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.006610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.006921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.006932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.007267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.007279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 00:31:51.396 [2024-11-19 09:49:38.007634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.396 [2024-11-19 09:49:38.007645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.396 qpair failed and we were unable to recover it. 
[... the same three-record sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats roughly 110 more times from 09:49:38.007975 through 09:49:38.044663, differing only in timestamps; repeats trimmed ...]
00:31:51.399 [2024-11-19 09:49:38.044993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.045015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.045423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.045445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.045756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.045777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.046148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.046186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.046520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.046547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 
00:31:51.399 [2024-11-19 09:49:38.046926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.046952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.047305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.047328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.047713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.047733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.048056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.048076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.048410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.048431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 
00:31:51.399 [2024-11-19 09:49:38.048723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.048743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.049088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.049110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.049450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.049471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.049818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.049840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.050190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.050214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 
00:31:51.399 [2024-11-19 09:49:38.050466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.050488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.050810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.050831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.051219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.051241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.051564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.051584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.399 [2024-11-19 09:49:38.051923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.051944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 
00:31:51.399 [2024-11-19 09:49:38.052272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.399 [2024-11-19 09:49:38.052294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.399 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.052642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.052663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.053023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.053044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.053351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.053373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.053698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.053728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 
00:31:51.400 [2024-11-19 09:49:38.054073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.054101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.054466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.054497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.054853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.054882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.055253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.055284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.055652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.055680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 
00:31:51.400 [2024-11-19 09:49:38.056046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.056074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.056436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.056464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.056802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.056832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.057083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.057112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.057494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.057523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 
00:31:51.400 [2024-11-19 09:49:38.057897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.057927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.058273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.058303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.058662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.058691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.059029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.059057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.059405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.059436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 
00:31:51.400 [2024-11-19 09:49:38.059765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.059793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.060154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.060191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.060549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.060578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.060936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.060966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.061314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.061343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 
00:31:51.400 [2024-11-19 09:49:38.061714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.061748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.062190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.062221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.062603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.062632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.062992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.063022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.063396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.063427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 
00:31:51.400 [2024-11-19 09:49:38.063774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.400 [2024-11-19 09:49:38.063804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.400 qpair failed and we were unable to recover it. 00:31:51.400 [2024-11-19 09:49:38.064177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.064208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.064557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.064586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.064927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.064957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.065304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.065334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 
00:31:51.401 [2024-11-19 09:49:38.065678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.065708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.066091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.066119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.066594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.066624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.066970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.066998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.067250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.067280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 
00:31:51.401 [2024-11-19 09:49:38.067611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.067639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.067990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.068018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.068427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.068456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.068814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.068842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.069178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.069207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 
00:31:51.401 [2024-11-19 09:49:38.069519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.069546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.069918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.069946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.070301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.070334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.070690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.070719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.071081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.071109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 
00:31:51.401 [2024-11-19 09:49:38.071463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.071493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.071832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.071861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.072134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.072173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.072538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.072567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.072916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.072945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 
00:31:51.401 [2024-11-19 09:49:38.073306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.073336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.073708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.073737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.074075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.074103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.074455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.074484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.074922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.074951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 
00:31:51.401 [2024-11-19 09:49:38.075321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.075351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.075734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.075762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.076096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.076124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.076391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.076421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.076790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.076818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 
00:31:51.401 [2024-11-19 09:49:38.077145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.077190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.077555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.077584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.077949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.077978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.401 [2024-11-19 09:49:38.078338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.401 [2024-11-19 09:49:38.078368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.401 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.078710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.078739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 
00:31:51.402 [2024-11-19 09:49:38.079100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.079130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.079492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.079521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.079825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.079861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.080234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.080264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.080519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.080547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 
00:31:51.402 [2024-11-19 09:49:38.080924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.080952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.081310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.081340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.081570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.081601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.082000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.082028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.082395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.082426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 
00:31:51.402 [2024-11-19 09:49:38.082791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.082820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.083150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.083189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.083523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.083551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.083912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.083940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.084279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.084308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 
00:31:51.402 [2024-11-19 09:49:38.084669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.084698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.085057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.085085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.085424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.085453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.085806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.085834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.086192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.086224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 
00:31:51.402 [2024-11-19 09:49:38.086577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.086605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.086970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.087000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.087353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.087389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.087748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.087776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.088135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.088170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 
00:31:51.402 [2024-11-19 09:49:38.088521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.088550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.088906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.088934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.089308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.089338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.089770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.089799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.090180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.090211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 
00:31:51.402 [2024-11-19 09:49:38.090568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.090596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.090971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.090999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.091371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.091402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.091762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.091792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.092199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.092228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 
00:31:51.402 [2024-11-19 09:49:38.092580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.092608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.402 qpair failed and we were unable to recover it. 00:31:51.402 [2024-11-19 09:49:38.092978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.402 [2024-11-19 09:49:38.093008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.093340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.093369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.093709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.093738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.094092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.094120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 
00:31:51.403 [2024-11-19 09:49:38.094477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.094507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.094841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.094869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.095221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.095252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.095587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.095616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.095975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.096004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 
00:31:51.403 [2024-11-19 09:49:38.096344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.096373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.096730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.096758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.097004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.097032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.097385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.097414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.097770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.097799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 
00:31:51.403 [2024-11-19 09:49:38.098149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.098193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.098479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.098507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.098874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.098903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.099293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.099324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.099690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.099719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 
00:31:51.403 [2024-11-19 09:49:38.100096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.100124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.100472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.100502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.100889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.100918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.101179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.101210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.101547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.101576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 
00:31:51.403 [2024-11-19 09:49:38.102019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.102046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.102289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.102318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.102696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.102730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.103083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.103111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.103457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.103487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 
00:31:51.403 [2024-11-19 09:49:38.103823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.103851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.104216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.104247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.104625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.104653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.105011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.105040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.105410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.105439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 
00:31:51.403 [2024-11-19 09:49:38.105778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.105806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.106176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.106206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.106556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.106583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.106934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.403 [2024-11-19 09:49:38.106961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.403 qpair failed and we were unable to recover it. 00:31:51.403 [2024-11-19 09:49:38.107323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.404 [2024-11-19 09:49:38.107354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.404 qpair failed and we were unable to recover it. 
00:31:51.404 [2024-11-19 09:49:38.107729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.404 [2024-11-19 09:49:38.107757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.404 qpair failed and we were unable to recover it. 00:31:51.404 [2024-11-19 09:49:38.108122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.404 [2024-11-19 09:49:38.108150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.404 qpair failed and we were unable to recover it. 00:31:51.404 [2024-11-19 09:49:38.108490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.404 [2024-11-19 09:49:38.108518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.404 qpair failed and we were unable to recover it. 00:31:51.404 [2024-11-19 09:49:38.108880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.404 [2024-11-19 09:49:38.108909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.404 qpair failed and we were unable to recover it. 00:31:51.404 [2024-11-19 09:49:38.109279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.404 [2024-11-19 09:49:38.109308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.404 qpair failed and we were unable to recover it. 
00:31:51.404 [2024-11-19 09:49:38.109578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.404 [2024-11-19 09:49:38.109605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.404 qpair failed and we were unable to recover it.
[The same three-message error sequence — posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously from 09:49:38.109 through 09:49:38.153 (log timestamps 00:31:51.404–00:31:51.681); the repeated copies are elided here.]
00:31:51.681 [2024-11-19 09:49:38.153817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.153844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.154205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.154235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.154601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.154630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.154991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.155020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.155379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.155410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 
00:31:51.681 [2024-11-19 09:49:38.155748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.155776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.156035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.156067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.156436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.156466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.156830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.156859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.157218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.157247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 
00:31:51.681 [2024-11-19 09:49:38.157502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.157530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.157884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.157912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.158272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.158302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.158666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.158694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.159043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.159073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 
00:31:51.681 [2024-11-19 09:49:38.159419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.159450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.159711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.159739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.160070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.160099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.160457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.160487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.160875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.160904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 
00:31:51.681 [2024-11-19 09:49:38.161234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.161263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.161628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.161658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.162023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.162051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.162385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.162415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.162783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.162811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 
00:31:51.681 [2024-11-19 09:49:38.163181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.163212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.163567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.163595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.163893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.163927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.164293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.164324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.164664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.164694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 
00:31:51.681 [2024-11-19 09:49:38.165038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.165066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.165405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.165435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.165784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.165812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.166215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.166245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 00:31:51.681 [2024-11-19 09:49:38.166598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.681 [2024-11-19 09:49:38.166625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.681 qpair failed and we were unable to recover it. 
00:31:51.682 [2024-11-19 09:49:38.167000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.167028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.167373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.167404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.167768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.167797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.168168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.168199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.168559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.168588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 
00:31:51.682 [2024-11-19 09:49:38.168961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.168990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.169326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.169356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.169713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.169742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.170100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.170128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.170509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.170539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 
00:31:51.682 [2024-11-19 09:49:38.170900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.170928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.171312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.171341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.171697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.171725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.172091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.172119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.172585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.172614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 
00:31:51.682 [2024-11-19 09:49:38.172972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.173000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.173338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.173368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.173710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.173737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.174102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.174130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.174524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.174553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 
00:31:51.682 [2024-11-19 09:49:38.174978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.175007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.175374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.175403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.175767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.175797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.176146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.176184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.176507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.176536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 
00:31:51.682 [2024-11-19 09:49:38.176907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.176937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.177293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.177323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.177588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.177619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.177882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.177911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.178300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.178330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 
00:31:51.682 [2024-11-19 09:49:38.178658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.178686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.179115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.179144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.179555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.179591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.179973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.180004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.180381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.180410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 
00:31:51.682 [2024-11-19 09:49:38.180643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.180674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.181085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.682 [2024-11-19 09:49:38.181115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.682 qpair failed and we were unable to recover it. 00:31:51.682 [2024-11-19 09:49:38.181489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.683 [2024-11-19 09:49:38.181520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.683 qpair failed and we were unable to recover it. 00:31:51.683 [2024-11-19 09:49:38.181854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.683 [2024-11-19 09:49:38.181882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.683 qpair failed and we were unable to recover it. 00:31:51.683 [2024-11-19 09:49:38.182194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.683 [2024-11-19 09:49:38.182224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.683 qpair failed and we were unable to recover it. 
00:31:51.683 [2024-11-19 09:49:38.182568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.683 [2024-11-19 09:49:38.182597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.683 qpair failed and we were unable to recover it.
00:31:51.686 [... previous three messages repeated through 2024-11-19 09:49:38.225843: connect() failed with errno = 111 (ECONNREFUSED) and qpair recovery failed for tqpair=0x7f1408000b90, addr=10.0.0.2, port=4420 ...]
00:31:51.686 [2024-11-19 09:49:38.226196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.226226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.226576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.226606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.226965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.226994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.227226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.227259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.227617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.227645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 
00:31:51.686 [2024-11-19 09:49:38.227888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.227919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.228283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.228314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.228676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.228706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.229081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.229110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.229454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.229483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 
00:31:51.686 [2024-11-19 09:49:38.229909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.229937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.230276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.230308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.230667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.230696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.231043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.231075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.231436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.231466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 
00:31:51.686 [2024-11-19 09:49:38.231807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.231837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.232214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.232245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.232608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.232636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.232942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.232970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.233335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.233364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 
00:31:51.686 [2024-11-19 09:49:38.233738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.233767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.234112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.234140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.234502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.234533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.234872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.234901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.235260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.235292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 
00:31:51.686 [2024-11-19 09:49:38.235640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.235668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.686 qpair failed and we were unable to recover it. 00:31:51.686 [2024-11-19 09:49:38.236027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.686 [2024-11-19 09:49:38.236055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.236432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.236462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.236830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.236859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.237200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.237229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 
00:31:51.687 [2024-11-19 09:49:38.237582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.237610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.237962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.237991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.238340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.238369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.238711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.238740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.239108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.239136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 
00:31:51.687 [2024-11-19 09:49:38.239521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.239551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.239914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.239943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.240346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.240384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.240755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.240787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.241148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.241199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 
00:31:51.687 [2024-11-19 09:49:38.241573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.241604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.241960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.241987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.242337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.242367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.242710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.242737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.243107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.243135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 
00:31:51.687 [2024-11-19 09:49:38.243473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.243503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.243869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.243897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.244252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.244285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.244656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.244685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.245026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.245056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 
00:31:51.687 [2024-11-19 09:49:38.245388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.245417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.245777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.245806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.246180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.246209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.246571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.246599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.246960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.246988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 
00:31:51.687 [2024-11-19 09:49:38.247346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.247378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.247735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.247765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.248146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.248185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.248565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.248595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.248974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.249004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 
00:31:51.687 [2024-11-19 09:49:38.249339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.249370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.249763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.249791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.250009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.250037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.250421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.687 [2024-11-19 09:49:38.250451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.687 qpair failed and we were unable to recover it. 00:31:51.687 [2024-11-19 09:49:38.250813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.250842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 
00:31:51.688 [2024-11-19 09:49:38.251228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.251258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 00:31:51.688 [2024-11-19 09:49:38.251637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.251667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 00:31:51.688 [2024-11-19 09:49:38.252065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.252094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 00:31:51.688 [2024-11-19 09:49:38.252448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.252478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 00:31:51.688 [2024-11-19 09:49:38.252731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.252759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 
00:31:51.688 [2024-11-19 09:49:38.253145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.253193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 00:31:51.688 [2024-11-19 09:49:38.253580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.253610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 00:31:51.688 [2024-11-19 09:49:38.253961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.253991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 00:31:51.688 [2024-11-19 09:49:38.254345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.254376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 00:31:51.688 [2024-11-19 09:49:38.254739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.688 [2024-11-19 09:49:38.254768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.688 qpair failed and we were unable to recover it. 
00:31:51.688 [2024-11-19 09:49:38.255014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.688 [2024-11-19 09:49:38.255043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.688 qpair failed and we were unable to recover it.
[the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats for every subsequent connection attempt through 2024-11-19 09:49:38.297410]
00:31:51.691 [2024-11-19 09:49:38.297647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.297675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.298029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.298059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.298348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.298381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.298797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.298826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.299184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.299213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 
00:31:51.691 [2024-11-19 09:49:38.299620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.299649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.300003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.300032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.300383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.300415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.300789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.300818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.301183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.301215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 
00:31:51.691 [2024-11-19 09:49:38.301593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.301623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.301849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.301878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.302168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.302198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.302554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.302583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.302830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.302862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 
00:31:51.691 [2024-11-19 09:49:38.303312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.303342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.303726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.303755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.304104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.304134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.304411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.304449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 00:31:51.691 [2024-11-19 09:49:38.304819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.691 [2024-11-19 09:49:38.304849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.691 qpair failed and we were unable to recover it. 
00:31:51.692 [2024-11-19 09:49:38.305218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.305249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.305625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.305658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.305905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.305934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.306165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.306195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.306553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.306584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 
00:31:51.692 [2024-11-19 09:49:38.306924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.306954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.307396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.307427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.307769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.307807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.308051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.308079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.308307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.308338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 
00:31:51.692 [2024-11-19 09:49:38.308698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.308727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.309078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.309111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.309497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.309528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.309886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.309915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.310291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.310329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 
00:31:51.692 [2024-11-19 09:49:38.310691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.310719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.311081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.311110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.311462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.311492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.311822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.311851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.312214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.312245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 
00:31:51.692 [2024-11-19 09:49:38.312664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.312696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.313057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.313086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.313334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.313364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.313750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.313780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.314135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.314197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 
00:31:51.692 [2024-11-19 09:49:38.314520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.314549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.314807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.314835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.315180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.315210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.315575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.315604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.315967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.315996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 
00:31:51.692 [2024-11-19 09:49:38.316379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.316410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.316766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.316795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.317030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.317060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.317402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.317436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.317770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.317798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 
00:31:51.692 [2024-11-19 09:49:38.318169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.318202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.318441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.692 [2024-11-19 09:49:38.318470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.692 qpair failed and we were unable to recover it. 00:31:51.692 [2024-11-19 09:49:38.318816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.318845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.319208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.319239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.319621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.319650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 
00:31:51.693 [2024-11-19 09:49:38.320002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.320029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.320288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.320318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.320690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.320719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.321087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.321116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.321466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.321495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 
00:31:51.693 [2024-11-19 09:49:38.321867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.321897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.322255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.322284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.322663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.322691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.323058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.323086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.323453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.323483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 
00:31:51.693 [2024-11-19 09:49:38.323820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.323849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.324153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.324192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.324540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.324568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.324930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.324959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.325299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.325335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 
00:31:51.693 [2024-11-19 09:49:38.325684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.325716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.326079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.326109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.326340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.326370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.326609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.326641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.327009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.327038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 
00:31:51.693 [2024-11-19 09:49:38.327388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.327419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.327784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.327812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.328055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.328083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.328458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.328488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.328851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.328880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 
00:31:51.693 [2024-11-19 09:49:38.329356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.329388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.329669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.329698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.330057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.330086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.330452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.330483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.693 [2024-11-19 09:49:38.330854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.330884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 
00:31:51.693 [2024-11-19 09:49:38.331257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.693 [2024-11-19 09:49:38.331287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.693 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.331656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.331685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.332055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.332084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.332323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.332352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.332711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.332740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 
00:31:51.694 [2024-11-19 09:49:38.333120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.333150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.333501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.333531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.333896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.333925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.334287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.334318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.334659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.334688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 
00:31:51.694 [2024-11-19 09:49:38.335034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.335064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.335422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.335452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.335853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.335881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.336251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.336280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.336627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.336656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 
00:31:51.694 [2024-11-19 09:49:38.336897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.336928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.337279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.337309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.337674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.337704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.338077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.338106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.338483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.338513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 
00:31:51.694 [2024-11-19 09:49:38.338952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.338981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.339342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.339372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.339733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.339762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.340180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.340210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.340547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.340583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 
00:31:51.694 [2024-11-19 09:49:38.340934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.340963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.341325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.341357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.341717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.341746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.341968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.342000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.342390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.342420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 
00:31:51.694 [2024-11-19 09:49:38.342765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.342795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.343174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.343204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.343542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.343570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.343898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.343927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.344300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.344332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 
00:31:51.694 [2024-11-19 09:49:38.344682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.344711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.344983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.345011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.345415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.345446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.694 qpair failed and we were unable to recover it. 00:31:51.694 [2024-11-19 09:49:38.345699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.694 [2024-11-19 09:49:38.345729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.346175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.346206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 
00:31:51.695 [2024-11-19 09:49:38.346594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.346622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.346989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.347018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.347388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.347417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.347769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.347797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.348175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.348206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 
00:31:51.695 [2024-11-19 09:49:38.348586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.348615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.348991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.349019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.349401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.349432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.349794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.349823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.350188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.350217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 
00:31:51.695 [2024-11-19 09:49:38.350589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.350619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.350964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.350994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.351355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.351385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.351746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.351775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.352129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.352165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 
00:31:51.695 [2024-11-19 09:49:38.352550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.352579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.352961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.352990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.353364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.353395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.353646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.353676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.354026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.354055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 
00:31:51.695 [2024-11-19 09:49:38.354401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.354432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.354759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.354787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.355148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.355188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.355427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.355456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.355788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.355824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 
00:31:51.695 [2024-11-19 09:49:38.356214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.356244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.356499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.356527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.356782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.356811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.357149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.357189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.357428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.357456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 
00:31:51.695 [2024-11-19 09:49:38.357803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.357833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.358200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.358230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.358604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.358634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.358986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.359014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.359366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.359396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 
00:31:51.695 [2024-11-19 09:49:38.359648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.695 [2024-11-19 09:49:38.359677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.695 qpair failed and we were unable to recover it. 00:31:51.695 [2024-11-19 09:49:38.360034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.696 [2024-11-19 09:49:38.360063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.696 qpair failed and we were unable to recover it. 00:31:51.696 [2024-11-19 09:49:38.360433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.696 [2024-11-19 09:49:38.360464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.696 qpair failed and we were unable to recover it. 00:31:51.696 [2024-11-19 09:49:38.360832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.696 [2024-11-19 09:49:38.360861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.696 qpair failed and we were unable to recover it. 00:31:51.696 [2024-11-19 09:49:38.361230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.696 [2024-11-19 09:49:38.361261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.696 qpair failed and we were unable to recover it. 
00:31:51.696 [2024-11-19 09:49:38.361605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.361633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.361997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.362027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.362393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.362424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.362779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.362807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.363048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.363076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.363441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.363471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.363793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.363823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.364198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.364228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.364598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.364626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.364995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.365023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.365287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.365317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.365698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.365727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.366097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.366127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.366501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.366532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.366880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.366909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.367258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.367288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.367560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.367588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.367948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.367977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.368340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.368369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.368712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.368741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.369107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.369136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.369437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.369466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.369835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.369863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.370202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.370231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.370577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.370616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.370838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.370868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.371119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.371152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.371544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.371575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.371908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.371937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.372295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.372326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.372587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.372615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.372959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.372989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.373354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.696 [2024-11-19 09:49:38.373384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.696 qpair failed and we were unable to recover it.
00:31:51.696 [2024-11-19 09:49:38.373739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.373769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.374135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.374179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.374506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.374535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.374895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.374924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.375101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.375129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.375491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.375522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.375865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.375894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.376269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.376298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.376664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.376692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.377068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.377095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.377446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.377476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.377846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.377876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.378273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.378303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.378558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.378590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.378957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.378985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.379399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.379428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.379859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.379889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.380251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.380281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.380652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.380682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.381051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.381079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.381415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.381446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.381814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.381844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.382206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.382236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.382608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.382638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.383012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.383040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.383173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.383205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.383597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.383627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.383994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.384024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.384407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.384438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.384796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.384825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.385185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.385214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.385567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.385602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.385958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.385986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.386243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.386273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.386657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.386687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.387048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.387077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.697 [2024-11-19 09:49:38.387418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.697 [2024-11-19 09:49:38.387448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.697 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.387811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.387840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.388201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.388230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.388610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.388638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.388996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.389027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.389300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.389330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.389705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.389735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.390060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.390089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.390448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.390479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.390842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.390872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.391230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.391261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.391632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.391662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.392026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.392055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.392414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.392445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.392803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.392833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.393179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.393213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.393591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.393620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.393974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.394002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.394353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.394383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.394725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.394754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.395084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.395113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.395478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.395508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.395769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.395799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.396182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.396213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.396581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.396610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.396972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.397000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.397422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.397451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.397883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.397913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.398271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.398300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.398672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.398702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.399065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.399094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.399449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.399480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.399844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.399872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.400126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.400156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.698 [2024-11-19 09:49:38.400552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.698 [2024-11-19 09:49:38.400581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.698 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.400951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.400986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.401353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.401382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.401741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.401769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.402132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.402181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.402518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.402547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.402895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.402923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.403233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.403262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.403641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.403670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.404049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.404079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.404461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.404491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.404832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.699 [2024-11-19 09:49:38.404863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.699 qpair failed and we were unable to recover it.
00:31:51.699 [2024-11-19 09:49:38.405214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.405244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 00:31:51.699 [2024-11-19 09:49:38.407185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.407253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 00:31:51.699 [2024-11-19 09:49:38.407666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.407702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 00:31:51.699 [2024-11-19 09:49:38.407957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.407990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 00:31:51.699 [2024-11-19 09:49:38.408343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.408373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 
00:31:51.699 [2024-11-19 09:49:38.408742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.408771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 00:31:51.699 [2024-11-19 09:49:38.409213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.409245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 00:31:51.699 [2024-11-19 09:49:38.409612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.409642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 00:31:51.699 [2024-11-19 09:49:38.410007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.410035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 00:31:51.699 [2024-11-19 09:49:38.410405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.699 [2024-11-19 09:49:38.410434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.699 qpair failed and we were unable to recover it. 
00:31:51.975 [2024-11-19 09:49:38.410773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.410802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.411156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.411197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.411556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.411585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.411950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.411980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.412327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.412356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 
00:31:51.975 [2024-11-19 09:49:38.412699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.412727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.413091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.413120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.413484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.413514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.413884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.413912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.414280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.414311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 
00:31:51.975 [2024-11-19 09:49:38.414561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.414590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.414944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.414974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.415350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.415380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.415625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.415653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.415874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.415903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 
00:31:51.975 [2024-11-19 09:49:38.416282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.416311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.416681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.416710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.417063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.417092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.417341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.417374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.417791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.417826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 
00:31:51.975 [2024-11-19 09:49:38.418193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.418225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.418575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.418604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.418861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.418890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.419321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.419352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.419709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.419737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 
00:31:51.975 [2024-11-19 09:49:38.420097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.420124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.975 [2024-11-19 09:49:38.420499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.975 [2024-11-19 09:49:38.420529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.975 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.420890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.420919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.421153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.421190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.421554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.421583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 
00:31:51.976 [2024-11-19 09:49:38.421938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.421967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.422333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.422366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.422728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.422756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.423117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.423146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.423543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.423573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 
00:31:51.976 [2024-11-19 09:49:38.423933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.423962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.424312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.424342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.424699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.424729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.424975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.425003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.425281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.425311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 
00:31:51.976 [2024-11-19 09:49:38.425656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.425686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.426036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.426067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.426427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.426457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.426826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.426855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.427221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.427250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 
00:31:51.976 [2024-11-19 09:49:38.427594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.427623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.427983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.428013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.428394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.428424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.428777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.428805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.429143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.429182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 
00:31:51.976 [2024-11-19 09:49:38.429531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.429559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.429924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.429954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.430319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.430349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.430711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.430739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.431112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.431141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 
00:31:51.976 [2024-11-19 09:49:38.431510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.431539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.431896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.431925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.432290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.432322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.432680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.432710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.433075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.433110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 
00:31:51.976 [2024-11-19 09:49:38.433470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.433499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.433757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.433786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.434139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.434184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.434546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.976 [2024-11-19 09:49:38.434576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.976 qpair failed and we were unable to recover it. 00:31:51.976 [2024-11-19 09:49:38.434834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.977 [2024-11-19 09:49:38.434862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.977 qpair failed and we were unable to recover it. 
00:31:51.977 [2024-11-19 09:49:38.435251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.435281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.435649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.435677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.436034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.436062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.436424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.436454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.436797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.436828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.437190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.437220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.437569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.437599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.437963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.437992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.438356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.438387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.438749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.438778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.439138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.439176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.439419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.439447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.439806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.439835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.440190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.440219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.440580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.440609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.440978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.441007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.441392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.441421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.441816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.441846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.442200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.442230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.442525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.442553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.442914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.442943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.443300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.443331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.443668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.443697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.444073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.444105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.444450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.444480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.444833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.444862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.445211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.445241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.445666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.445695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.446046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.446076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.446428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.446458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.446813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.446843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.447099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.447128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.447479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.447510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.447865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.447894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.448264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.448300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.448581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.448609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.977 [2024-11-19 09:49:38.448979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.977 [2024-11-19 09:49:38.449007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.977 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.449411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.449443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.449824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.449853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.450223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.450252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.450503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.450531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.450917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.450946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.451295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.451325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.451563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.451596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.451883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.451912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.452178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.452208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.452568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.452598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.452928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.452957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.453317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.453349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.453706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.453736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.454092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.454123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.454512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.454542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.454908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.454937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.455306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.455336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.455697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.455726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.456094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.456124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.456483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.456513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.456886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.456915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.457180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.457210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.457554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.457583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.458029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.458059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.458311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.458342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.458700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.458729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.459086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.459115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.459493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.459522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.459788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.459818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.460174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.460205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.460467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.460497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.460837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.460866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.461267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.461298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.461669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.461697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.462068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.462096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.462525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.462554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.462807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.978 [2024-11-19 09:49:38.462837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.978 qpair failed and we were unable to recover it.
00:31:51.978 [2024-11-19 09:49:38.463184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.463221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.463591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.463619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.463977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.464005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.464236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.464266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.464627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.464655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.465024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.465055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.465413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.465445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.465776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.465806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.466167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.466200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.466592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.466620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.466950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.466981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.467339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.467370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.467708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.467737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.468077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.468106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.468482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.468512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.468858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.468886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.469127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.469155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.469524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.469556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.469919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.469948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.470215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.470244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.470601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.470630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.470961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.470989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.471331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.471360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.471623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.471653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.472001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.472031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.472417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.472447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.472810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.472837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.473240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.473274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.473648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.473678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.474051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.474080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.474464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.474494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.474916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.979 [2024-11-19 09:49:38.474944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.979 qpair failed and we were unable to recover it.
00:31:51.979 [2024-11-19 09:49:38.475305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.980 [2024-11-19 09:49:38.475334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.980 qpair failed and we were unable to recover it.
00:31:51.980 [2024-11-19 09:49:38.475715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.980 [2024-11-19 09:49:38.475744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.980 qpair failed and we were unable to recover it.
00:31:51.980 [2024-11-19 09:49:38.475995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.980 [2024-11-19 09:49:38.476027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.980 qpair failed and we were unable to recover it.
00:31:51.980 [2024-11-19 09:49:38.476389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.980 [2024-11-19 09:49:38.476420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.980 qpair failed and we were unable to recover it.
00:31:51.980 [2024-11-19 09:49:38.476778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.980 [2024-11-19 09:49:38.476807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.980 qpair failed and we were unable to recover it.
00:31:51.980 [2024-11-19 09:49:38.477173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.980 [2024-11-19 09:49:38.477203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.980 qpair failed and we were unable to recover it.
00:31:51.980 [2024-11-19 09:49:38.477501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.980 [2024-11-19 09:49:38.477530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.980 qpair failed and we were unable to recover it.
00:31:51.980 [2024-11-19 09:49:38.477890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.980 [2024-11-19 09:49:38.477918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.980 qpair failed and we were unable to recover it.
00:31:51.980 [2024-11-19 09:49:38.478294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:51.980 [2024-11-19 09:49:38.478324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:51.980 qpair failed and we were unable to recover it.
00:31:51.980 [2024-11-19 09:49:38.478704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.478734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.479096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.479125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.479493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.479523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.479883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.479913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.480282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.480311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 
00:31:51.980 [2024-11-19 09:49:38.480679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.480708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.481078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.481107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.481482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.481512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.481868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.481897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.482237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.482266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 
00:31:51.980 [2024-11-19 09:49:38.482424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.482451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.482793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.482822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.483183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.483214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.483604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.483633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.483998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.484027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 
00:31:51.980 [2024-11-19 09:49:38.484388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.484419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.484776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.484806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.485176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.485207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.485566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.485595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.485841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.485869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 
00:31:51.980 [2024-11-19 09:49:38.486248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.486279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.486658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.486686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.487044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.487074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.487451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.487480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.487838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.487867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 
00:31:51.980 [2024-11-19 09:49:38.488227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.488259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.488632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.488666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.489017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.489047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.980 qpair failed and we were unable to recover it. 00:31:51.980 [2024-11-19 09:49:38.489438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.980 [2024-11-19 09:49:38.489469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.489729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.489757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 
00:31:51.981 [2024-11-19 09:49:38.490018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.490049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.490413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.490443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.490801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.490830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.491202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.491231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.491589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.491617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 
00:31:51.981 [2024-11-19 09:49:38.491863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.491895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.492253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.492284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.492656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.492684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.493046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.493074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.493448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.493478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 
00:31:51.981 [2024-11-19 09:49:38.493838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.493868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.494231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.494261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.494613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.494642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.494881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.494912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.495266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.495297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 
00:31:51.981 [2024-11-19 09:49:38.495703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.495732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.496128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.496156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.496448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.496478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.496708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.496741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.497096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.497126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 
00:31:51.981 [2024-11-19 09:49:38.497385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.497415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.497758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.497787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.498127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.498156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.498523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.498552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.498824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.498854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 
00:31:51.981 [2024-11-19 09:49:38.499216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.499246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.499594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.499622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.500029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.500058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.500300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.500330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.500697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.500725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 
00:31:51.981 [2024-11-19 09:49:38.501091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.501122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.501483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.501513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.501866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.501895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.502265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.502294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.502629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.502659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 
00:31:51.981 [2024-11-19 09:49:38.503089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.981 [2024-11-19 09:49:38.503118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.981 qpair failed and we were unable to recover it. 00:31:51.981 [2024-11-19 09:49:38.503478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.503515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.503870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.503899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.504338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.504367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.504699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.504727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 
00:31:51.982 [2024-11-19 09:49:38.505080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.505109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.505486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.505516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.505875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.505904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.506151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.506193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.506534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.506572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 
00:31:51.982 [2024-11-19 09:49:38.506920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.506949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.507330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.507360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.507729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.507758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.508118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.508147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 00:31:51.982 [2024-11-19 09:49:38.508322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.982 [2024-11-19 09:49:38.508351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.982 qpair failed and we were unable to recover it. 
00:31:51.985 [2024-11-19 09:49:38.549546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.549577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.549910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.549939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.550209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.550238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.550582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.550613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.550970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.550999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 
00:31:51.985 [2024-11-19 09:49:38.551368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.551398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.551850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.551879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.552245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.552274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.552521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.552550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.552934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.552965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 
00:31:51.985 [2024-11-19 09:49:38.553311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.553341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.553720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.553749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.554096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.554126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.554489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.554519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.554788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.554816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 
00:31:51.985 [2024-11-19 09:49:38.555195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.555226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.555600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.985 [2024-11-19 09:49:38.555628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.985 qpair failed and we were unable to recover it. 00:31:51.985 [2024-11-19 09:49:38.556099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.556127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.556516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.556546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.556903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.556931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 
00:31:51.986 [2024-11-19 09:49:38.557174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.557204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.557605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.557633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.557990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.558020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.558397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.558428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.558672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.558699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 
00:31:51.986 [2024-11-19 09:49:38.559064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.559092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.559467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.559498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.559858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.559886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.560277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.560307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.560535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.560563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 
00:31:51.986 [2024-11-19 09:49:38.560926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.560954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.561227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.561257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.561634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.561663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.562020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.562049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.562446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.562477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 
00:31:51.986 [2024-11-19 09:49:38.562863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.562891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.563248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.563285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.563554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.563582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.563948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.563976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.564323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.564353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 
00:31:51.986 [2024-11-19 09:49:38.564696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.564725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.565101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.565129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.565486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.565516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.565894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.565923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.566357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.566387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 
00:31:51.986 [2024-11-19 09:49:38.566746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.566776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.567034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.567063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.567411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.567440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.567789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.567817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.568171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.568203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 
00:31:51.986 [2024-11-19 09:49:38.568577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.568607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.568953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.568982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.569256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.569286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.986 qpair failed and we were unable to recover it. 00:31:51.986 [2024-11-19 09:49:38.569623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.986 [2024-11-19 09:49:38.569653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.569903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.569931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 
00:31:51.987 [2024-11-19 09:49:38.570191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.570221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.570618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.570646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.570988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.571018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.571404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.571435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.571783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.571812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 
00:31:51.987 [2024-11-19 09:49:38.572179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.572210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.572374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.572404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.572767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.572796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.573156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.573196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.573561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.573589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 
00:31:51.987 [2024-11-19 09:49:38.573962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.573990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.574263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.574292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.574718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.574747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.575098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.575127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.575486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.575516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 
00:31:51.987 [2024-11-19 09:49:38.575887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.575915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.576303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.576334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.576684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.576717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.576976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.577004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 00:31:51.987 [2024-11-19 09:49:38.577341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.577371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it. 
00:31:51.987 [2024-11-19 09:49:38.577630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.987 [2024-11-19 09:49:38.577661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.987 qpair failed and we were unable to recover it.
[... identical error group repeated: connect() failed with errno = 111 (ECONNREFUSED), followed by the same nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f1408000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it", recurring from 2024-11-19 09:49:38.578016 through 09:49:38.621322; duplicate log lines elided ...]
00:31:51.990 [2024-11-19 09:49:38.621680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.990 [2024-11-19 09:49:38.621709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.990 qpair failed and we were unable to recover it. 00:31:51.990 [2024-11-19 09:49:38.622064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.990 [2024-11-19 09:49:38.622093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.990 qpair failed and we were unable to recover it. 00:31:51.990 [2024-11-19 09:49:38.622521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.990 [2024-11-19 09:49:38.622552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.990 qpair failed and we were unable to recover it. 00:31:51.990 [2024-11-19 09:49:38.622905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.990 [2024-11-19 09:49:38.622934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.990 qpair failed and we were unable to recover it. 00:31:51.990 [2024-11-19 09:49:38.623304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.990 [2024-11-19 09:49:38.623333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.990 qpair failed and we were unable to recover it. 
00:31:51.990 [2024-11-19 09:49:38.623699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.990 [2024-11-19 09:49:38.623727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.990 qpair failed and we were unable to recover it. 00:31:51.990 [2024-11-19 09:49:38.624058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.990 [2024-11-19 09:49:38.624094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.990 qpair failed and we were unable to recover it. 00:31:51.990 [2024-11-19 09:49:38.624461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.990 [2024-11-19 09:49:38.624491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.990 qpair failed and we were unable to recover it. 00:31:51.990 [2024-11-19 09:49:38.624847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.990 [2024-11-19 09:49:38.624878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.990 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.625103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.625135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 
00:31:51.991 [2024-11-19 09:49:38.625543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.625572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.625941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.625971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.626335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.626365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.626723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.626753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.627089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.627119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 
00:31:51.991 [2024-11-19 09:49:38.627482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.627512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.627873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.627902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.628271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.628301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.628676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.628704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.629067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.629097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 
00:31:51.991 [2024-11-19 09:49:38.629490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.629521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.629873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.629903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.630264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.630294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.630661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.630689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.631062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.631091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 
00:31:51.991 [2024-11-19 09:49:38.631432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.631462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.631823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.631853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.632281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.632312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.632677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.632706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.633081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.633110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 
00:31:51.991 [2024-11-19 09:49:38.633475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.633506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.633868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.633897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.634254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.634285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.634650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.634680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.635045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.635074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 
00:31:51.991 [2024-11-19 09:49:38.635444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.635475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.635841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.635870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.636229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.636261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.636638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.636667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.637031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.637060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 
00:31:51.991 [2024-11-19 09:49:38.637415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.637445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.637762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.637793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.638142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.638181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.638408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.638439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.638803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.638832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 
00:31:51.991 [2024-11-19 09:49:38.639191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.639221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.991 [2024-11-19 09:49:38.639580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.991 [2024-11-19 09:49:38.639615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.991 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.639961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.639991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.640355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.640386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.640754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.640783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 
00:31:51.992 [2024-11-19 09:49:38.641132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.641184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.641559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.641587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.641949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.641978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.642344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.642374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.642740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.642769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 
00:31:51.992 [2024-11-19 09:49:38.643131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.643172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.643507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.643536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.643895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.643924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.644286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.644317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.644657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.644688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 
00:31:51.992 [2024-11-19 09:49:38.645049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.645079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.645435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.645465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.645841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.645870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.646218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.646248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.646593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.646622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 
00:31:51.992 [2024-11-19 09:49:38.647057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.647086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.647417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.647447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.647821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.647850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.648106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.648135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.648514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.648544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 
00:31:51.992 [2024-11-19 09:49:38.648902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.648932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.649306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.649342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.649696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.649725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.650087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.650117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.650498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.650529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 
00:31:51.992 [2024-11-19 09:49:38.650892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.650921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.651276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.651306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.651552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.651584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.651945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.651974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.652334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.652365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 
00:31:51.992 [2024-11-19 09:49:38.652730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.652759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.653125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.653154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.653525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.653554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.653934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.992 [2024-11-19 09:49:38.653963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.992 qpair failed and we were unable to recover it. 00:31:51.992 [2024-11-19 09:49:38.654316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.654347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 
00:31:51.993 [2024-11-19 09:49:38.654581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.654612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.654972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.655009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.655343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.655374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.655701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.655730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.656116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.656146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 
00:31:51.993 [2024-11-19 09:49:38.656519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.656548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.656892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.656920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.657303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.657333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.657706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.657733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.658101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.658130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 
00:31:51.993 [2024-11-19 09:49:38.658497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.658528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.658883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.658911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.659214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.659244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.659607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.659636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.660004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.660034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 
00:31:51.993 [2024-11-19 09:49:38.660405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.660436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.660810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.660840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.661212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.661242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.661620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.661649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.662005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.662034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 
00:31:51.993 [2024-11-19 09:49:38.662370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.662400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.662760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.662789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.663037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.663066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.663416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.663447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.663683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.663714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 
00:31:51.993 [2024-11-19 09:49:38.664064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.664093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.664459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.664489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.664868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.664898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.665270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.665302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.665646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.665676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 
00:31:51.993 [2024-11-19 09:49:38.666029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.666058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.666398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.993 [2024-11-19 09:49:38.666429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.993 qpair failed and we were unable to recover it. 00:31:51.993 [2024-11-19 09:49:38.666764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.666793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.667156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.667208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.667547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.667576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 
00:31:51.994 [2024-11-19 09:49:38.667921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.667950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.668309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.668341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.668697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.668725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.668972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.669000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.669347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.669377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 
00:31:51.994 [2024-11-19 09:49:38.669833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.669861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.670216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.670246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.670624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.670653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.671025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.671053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.671413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.671442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 
00:31:51.994 [2024-11-19 09:49:38.671802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.671832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.672199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.672231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.672669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.672697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.673059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.673088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.673450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.673479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 
00:31:51.994 [2024-11-19 09:49:38.673827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.673857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.674225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.674255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.674629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.674657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.675033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.675062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.675430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.675460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 
00:31:51.994 [2024-11-19 09:49:38.675823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.675854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.676214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.676247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.676598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.676627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.676991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.677019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.677388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.677418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 
00:31:51.994 [2024-11-19 09:49:38.677770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.677799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.678170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.678199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.678563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.678592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.678827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.678860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.679251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.679281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 
00:31:51.994 [2024-11-19 09:49:38.679644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.679673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.680065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.680093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.680487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.680518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.680885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.680923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.994 qpair failed and we were unable to recover it. 00:31:51.994 [2024-11-19 09:49:38.681281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.994 [2024-11-19 09:49:38.681312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 
00:31:51.995 [2024-11-19 09:49:38.681678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.681706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.681964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.681993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.682347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.682377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.682740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.682769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.683114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.683144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 
00:31:51.995 [2024-11-19 09:49:38.683499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.683528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.683895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.683924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.684312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.684344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.684703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.684732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.685178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.685209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 
00:31:51.995 [2024-11-19 09:49:38.685651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.685681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.685922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.685951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.686312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.686342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.686706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.686735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.687080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.687109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 
00:31:51.995 [2024-11-19 09:49:38.687464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.687494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.687859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.687887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.688147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.688200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.688568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.688599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 00:31:51.995 [2024-11-19 09:49:38.688970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:51.995 [2024-11-19 09:49:38.688998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:51.995 qpair failed and we were unable to recover it. 
00:31:52.274 [2024-11-19 09:49:38.734066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.734098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.734464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.734495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.734855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.734889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.735257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.735290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.735651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.735683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 
00:31:52.274 [2024-11-19 09:49:38.736042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.736074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.736307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.736341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.736685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.736717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.737077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.737107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.737512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.737546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 
00:31:52.274 [2024-11-19 09:49:38.737897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.737930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.738281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.738313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.738674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.738706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.739080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.739112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.739512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.739547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 
00:31:52.274 [2024-11-19 09:49:38.739750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.739784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.740157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.740200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.740544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.740577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.740933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.740964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.741327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.741361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 
00:31:52.274 [2024-11-19 09:49:38.741582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.741615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.741977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.274 [2024-11-19 09:49:38.742010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.274 qpair failed and we were unable to recover it. 00:31:52.274 [2024-11-19 09:49:38.742375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.742407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.742651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.742684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.743051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.743082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 
00:31:52.275 [2024-11-19 09:49:38.743436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.743469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.743699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.743732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.744077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.744110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.744504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.744542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.746270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.746333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 
00:31:52.275 [2024-11-19 09:49:38.746763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.746799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.747202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.747236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.747588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.747620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.747968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.747999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.748370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.748403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 
00:31:52.275 [2024-11-19 09:49:38.748754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.748786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.749177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.749210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.749459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.749492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.749855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.749887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.750252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.750286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 
00:31:52.275 [2024-11-19 09:49:38.750642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.750673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.751028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.751061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.751399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.751433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.751798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.751829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.752183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.752215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 
00:31:52.275 [2024-11-19 09:49:38.752465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.752495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.752935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.752964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.753249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.753280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.753660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.753692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.754049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.754081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 
00:31:52.275 [2024-11-19 09:49:38.754418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.754450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.754809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.754842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.755206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.755239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.755626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.755659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.756010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.756043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 
00:31:52.275 [2024-11-19 09:49:38.756497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.756530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.756874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.756908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.757265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.757296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.757669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.757701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 00:31:52.275 [2024-11-19 09:49:38.758057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.275 [2024-11-19 09:49:38.758089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.275 qpair failed and we were unable to recover it. 
00:31:52.276 [2024-11-19 09:49:38.758342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.758375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.758726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.758758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.759087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.759118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.759402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.759434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.759810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.759841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 
00:31:52.276 [2024-11-19 09:49:38.760205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.760239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.760596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.760628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.760987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.761019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.761392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.761430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.761773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.761807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 
00:31:52.276 [2024-11-19 09:49:38.762204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.762235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.762496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.762529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.762908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.762940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.763297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.763330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 00:31:52.276 [2024-11-19 09:49:38.763694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.276 [2024-11-19 09:49:38.763725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.276 qpair failed and we were unable to recover it. 
00:31:52.276 [2024-11-19 09:49:38.764078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.764110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.764475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.764508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.764946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.764977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.765352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.765385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.765763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.765795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.766156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.766198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.766406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.766435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.766799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.766830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.767198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.767233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.767590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.767622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.767849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.767880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.768243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.768274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.768641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.768671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.768989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.769019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.769413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.769446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.769812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.769844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.770205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.770237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.770585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.770615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.770976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.771006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.771376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.771409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.771767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.771800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.772046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.772075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.276 qpair failed and we were unable to recover it.
00:31:52.276 [2024-11-19 09:49:38.772437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.276 [2024-11-19 09:49:38.772470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.772717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.772748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.773116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.773146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.773517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.773549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.773764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.773796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.774182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.774216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.774574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.774605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.774975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.775007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.775394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.775424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.775793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.775823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.776188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.776221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.776623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.776661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.777006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.777037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.777419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.777451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.777819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.777850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.778240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.778272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.778624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.778655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.778903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.778936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.779259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.779291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.779657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.779687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.780064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.780096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.780325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.780358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.780737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.780770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.781130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.781172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.781533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.781565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.781924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.781956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.782394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.782426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.782661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.782692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.783073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.783107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.783477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.783509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.783865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.783898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.277 qpair failed and we were unable to recover it.
00:31:52.277 [2024-11-19 09:49:38.784132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.277 [2024-11-19 09:49:38.784186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.784489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.784522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.784882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.784914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.785282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.785316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.785686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.785717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.786088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.786119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.786483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.786517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.786863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.786895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.787260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.787292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.787660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.787691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.788054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.788086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.788440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.788471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.788836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.788868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.789228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.789259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.789629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.789661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.789891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.789925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.790303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.790336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.790697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.790729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.791110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.791141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.791511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.791543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.791799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.791836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.792208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.792243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.792617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.792647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.793035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.793067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.793433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.793466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.793821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.793853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.794192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.794225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.794586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.794616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.794966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.794996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.795224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.795256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.795578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.795609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.795831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.795861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.796215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.796248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.796630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.796665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.797026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.797057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.797422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.797456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.797776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.797808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.798179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.278 [2024-11-19 09:49:38.798211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.278 qpair failed and we were unable to recover it.
00:31:52.278 [2024-11-19 09:49:38.798568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.798598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.798955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.798987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.799340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.799374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.799731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.799763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.800125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.800181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.800538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.800570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.800913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.800944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.801294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.801327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.801683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.801714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.802052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.802082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.802423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.802454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.802869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.802900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.803243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.803275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.803645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.803676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.804033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.804063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.804402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.804432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.804805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.804835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.805185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.805220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.805568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.805599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.805834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.805863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.806225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.806257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.806631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.806663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.806904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.806944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.807324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.279 [2024-11-19 09:49:38.807357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.279 qpair failed and we were unable to recover it.
00:31:52.279 [2024-11-19 09:49:38.807605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.807636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.807978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.808009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.808397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.808430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.808792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.808822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.809184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.809217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 
00:31:52.279 [2024-11-19 09:49:38.809577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.809610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.809973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.810005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.810346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.810378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.810742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.810773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.811115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.811145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 
00:31:52.279 [2024-11-19 09:49:38.811396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.811430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.811801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.811833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.812192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.812226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.279 qpair failed and we were unable to recover it. 00:31:52.279 [2024-11-19 09:49:38.812584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.279 [2024-11-19 09:49:38.812614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.812968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.813001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 
00:31:52.280 [2024-11-19 09:49:38.813343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.813376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.813806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.813838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.814187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.814222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.814602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.814633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.814987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.815019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 
00:31:52.280 [2024-11-19 09:49:38.815392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.815425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.815771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.815804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.816167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.816200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.816566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.816599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.816960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.816990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 
00:31:52.280 [2024-11-19 09:49:38.817343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.817376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.817775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.817806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.818170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.818201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.818555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.818585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.818948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.818980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 
00:31:52.280 [2024-11-19 09:49:38.819333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.819365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.819716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.819747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.820105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.820137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.820513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.820546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.820896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.820927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 
00:31:52.280 [2024-11-19 09:49:38.821290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.821325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.821683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.821714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.822071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.822102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.822366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.822409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.822789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.822819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 
00:31:52.280 [2024-11-19 09:49:38.823182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.823216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.823565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.823596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.823950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.823981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.824339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.824369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.824730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.824760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 
00:31:52.280 [2024-11-19 09:49:38.825121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.825152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.825544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.825575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.825928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.825961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.826387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.826419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.826767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.826799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 
00:31:52.280 [2024-11-19 09:49:38.827145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.280 [2024-11-19 09:49:38.827197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.280 qpair failed and we were unable to recover it. 00:31:52.280 [2024-11-19 09:49:38.827547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.827578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.827934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.827966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.828313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.828345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.828698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.828729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 
00:31:52.281 [2024-11-19 09:49:38.829095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.829126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.829533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.829564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.829907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.829939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.830299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.830332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.830697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.830729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 
00:31:52.281 [2024-11-19 09:49:38.831087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.831118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.831437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.831471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.831818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.831850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.832096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.832130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.832515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.832549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 
00:31:52.281 [2024-11-19 09:49:38.832900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.832931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.833304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.833338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.833705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.833735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.834094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.834126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.834518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.834550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 
00:31:52.281 [2024-11-19 09:49:38.834914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.834946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.835303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.835336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.835683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.835716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.836058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.836088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 00:31:52.281 [2024-11-19 09:49:38.836341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.281 [2024-11-19 09:49:38.836373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.281 qpair failed and we were unable to recover it. 
00:31:52.281 [2024-11-19 09:49:38.836720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.281 [2024-11-19 09:49:38.836750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.281 qpair failed and we were unable to recover it.
00:31:52.281 [2024-11-19 09:49:38.837116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.281 [2024-11-19 09:49:38.837150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.281 qpair failed and we were unable to recover it.
00:31:52.281 [2024-11-19 09:49:38.837524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.281 [2024-11-19 09:49:38.837556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.281 qpair failed and we were unable to recover it.
00:31:52.281 [2024-11-19 09:49:38.837927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.281 [2024-11-19 09:49:38.837964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.281 qpair failed and we were unable to recover it.
00:31:52.281 [2024-11-19 09:49:38.838206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.281 [2024-11-19 09:49:38.838237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.281 qpair failed and we were unable to recover it.
00:31:52.281 [2024-11-19 09:49:38.838635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.281 [2024-11-19 09:49:38.838665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.281 qpair failed and we were unable to recover it.
00:31:52.281 [2024-11-19 09:49:38.839015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.281 [2024-11-19 09:49:38.839048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.281 qpair failed and we were unable to recover it.
00:31:52.281 [2024-11-19 09:49:38.839405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.281 [2024-11-19 09:49:38.839437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.281 qpair failed and we were unable to recover it.
00:31:52.281 [2024-11-19 09:49:38.839790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.839823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.840179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.840211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.840619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.840651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.841011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.841045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.841378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.841411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.841775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.841807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.842171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.842204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.842554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.842585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.842945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.842976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.843216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.843250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.843623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.843655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.844008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.844040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.844414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.844446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.844804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.844835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.845195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.845228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.845626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.845657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.846015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.846047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.846416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.846447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.846807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.846839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.847067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.847097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.847461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.847493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.847839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.847872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.848094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.848127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.848512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.848546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.848890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.848922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.849334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.849368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.849709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.849741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.850099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.850131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.850277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.850313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.850670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.850701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.851054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.851086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.851442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.851476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.851834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.851867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.852229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.852260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.852627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.852659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.852999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.853037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.853395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.282 [2024-11-19 09:49:38.853429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.282 qpair failed and we were unable to recover it.
00:31:52.282 [2024-11-19 09:49:38.853778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.853809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.854178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.854210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.854566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.854597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.854945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.854977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.855226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.855258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.855609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.855639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.855882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.855914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.856272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.856304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.856667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.856699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.857054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.857085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.857487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.857520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.857867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.857897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.858268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.858302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.858663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.858695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.859050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.859082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.859450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.859484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.859816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.859849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.860194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.860226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.860583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.860613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.860978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.861008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.861349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.861383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.861741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.861773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.862140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.862183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.862533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.862563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.862919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.862949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.863302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.863336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.863725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.863757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.864130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.864175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.864546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.864577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.864927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.864960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.865350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.865383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.865741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.865772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.866134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.866177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.866539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.866570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.866802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.866836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.867195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.867227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.867589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.867621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.283 [2024-11-19 09:49:38.867991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.283 [2024-11-19 09:49:38.868023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.283 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.868394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.868426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.868782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.868814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.869175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.869208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.869570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.869600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.870026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.870058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.870412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.870445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.870796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.870827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.871185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.871219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.871574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.871604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.871972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.872003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.872386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.872417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.872759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.872790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.873118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.873152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.873539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.873571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.873943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.873974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.874339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.874374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.874781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.874811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.875189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.875223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.875583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.875614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.875963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.875994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.876233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.876264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.876623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.876654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.876994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.877024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.877383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.877416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.877768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.877799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.878185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.878217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.878571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.878601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.878962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.878999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.879228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.879260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.879621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.879651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.880016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.880047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.880456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.284 [2024-11-19 09:49:38.880490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.284 qpair failed and we were unable to recover it.
00:31:52.284 [2024-11-19 09:49:38.880873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.284 [2024-11-19 09:49:38.880904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.284 qpair failed and we were unable to recover it. 00:31:52.284 [2024-11-19 09:49:38.881265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.284 [2024-11-19 09:49:38.881299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.284 qpair failed and we were unable to recover it. 00:31:52.284 [2024-11-19 09:49:38.881657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.284 [2024-11-19 09:49:38.881688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.284 qpair failed and we were unable to recover it. 00:31:52.284 [2024-11-19 09:49:38.882038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.284 [2024-11-19 09:49:38.882070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.284 qpair failed and we were unable to recover it. 00:31:52.284 [2024-11-19 09:49:38.882423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.284 [2024-11-19 09:49:38.882455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.284 qpair failed and we were unable to recover it. 
00:31:52.284 [2024-11-19 09:49:38.882825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.284 [2024-11-19 09:49:38.882855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.883226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.883257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.883653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.883684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.884037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.884069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.884481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.884513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 
00:31:52.285 [2024-11-19 09:49:38.884860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.884892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.885248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.885279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.885648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.885680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.886076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.886109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.886495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.886528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 
00:31:52.285 [2024-11-19 09:49:38.886881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.886914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.887268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.887301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.887668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.887700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.888039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.888070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.888303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.888334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 
00:31:52.285 [2024-11-19 09:49:38.888690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.888722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.889081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.889112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.889525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.889559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.889894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.889926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.890279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.890312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 
00:31:52.285 [2024-11-19 09:49:38.890674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.890706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.891054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.891088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.891438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.891471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.891826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.891859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.892214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.892246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 
00:31:52.285 [2024-11-19 09:49:38.892479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.892511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.892863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.892896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.893244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.893279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.893638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.893669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.894022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.894053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 
00:31:52.285 [2024-11-19 09:49:38.894420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.894460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.894816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.894846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.895207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.895241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.895526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.895557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.895912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.895945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 
00:31:52.285 [2024-11-19 09:49:38.896299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.896332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.896727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.896757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.897112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.285 [2024-11-19 09:49:38.897144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.285 qpair failed and we were unable to recover it. 00:31:52.285 [2024-11-19 09:49:38.897522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.897553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.897909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.897941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 
00:31:52.286 [2024-11-19 09:49:38.898292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.898324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.898687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.898718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.899072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.899104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.899494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.899526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.899910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.899945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 
00:31:52.286 [2024-11-19 09:49:38.900277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.900310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.900663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.900696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.901055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.901085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.901443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.901477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.901822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.901852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 
00:31:52.286 [2024-11-19 09:49:38.902215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.902249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.902616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.902648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.902994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.903025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.903343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.903376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.903738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.903769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 
00:31:52.286 [2024-11-19 09:49:38.904127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.904174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.904550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.904581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.904947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.904978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.905336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.905368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.905726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.905757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 
00:31:52.286 [2024-11-19 09:49:38.906007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.906037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.906407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.906440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.906687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.906718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.907098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.907129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.907392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.907427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 
00:31:52.286 [2024-11-19 09:49:38.907788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.907818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.908247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.908279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.908606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.908636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.908993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.909026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 00:31:52.286 [2024-11-19 09:49:38.909395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.286 [2024-11-19 09:49:38.909428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.286 qpair failed and we were unable to recover it. 
00:31:52.286 [2024-11-19 09:49:38.909818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.286 [2024-11-19 09:49:38.909855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.286 qpair failed and we were unable to recover it.
[... the identical three-line failure triplet (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 09:49:38.909818 through 09:49:38.953335; over one hundred consecutive attempts, all failing with errno = 111 ...]
00:31:52.290 [2024-11-19 09:49:38.953690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.953720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.954118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.954150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.954547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.954581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.954970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.955000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.955393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.955426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 
00:31:52.290 [2024-11-19 09:49:38.955770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.955806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.956167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.956202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.956556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.956587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.956950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.956982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.957343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.957376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 
00:31:52.290 [2024-11-19 09:49:38.957730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.957761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.958119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.958150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.958551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.958582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.958947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.958979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.959348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.959380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 
00:31:52.290 [2024-11-19 09:49:38.959729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.959760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.960121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.960152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.960520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.960552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.960902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.960933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.961283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.961317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 
00:31:52.290 [2024-11-19 09:49:38.961678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.961709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.961943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.961972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.962250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.962282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.962629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.962659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.963024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.963055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 
00:31:52.290 [2024-11-19 09:49:38.963411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.963444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.963761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.963791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.964214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.964246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.964597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.964629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.964998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.965030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 
00:31:52.290 [2024-11-19 09:49:38.965394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.965427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.965773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.965804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.966176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.966209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.966554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.966584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.966938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.966969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 
00:31:52.290 [2024-11-19 09:49:38.967328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.967362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.290 [2024-11-19 09:49:38.967720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.290 [2024-11-19 09:49:38.967750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.290 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.967979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.968009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.968380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.968412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.968769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.968801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 
00:31:52.291 [2024-11-19 09:49:38.969152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.969197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.969438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.969472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.969839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.969871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.970233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.970268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.970632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.970662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 
00:31:52.291 [2024-11-19 09:49:38.971012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.971051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.971414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.971447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.971815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.971846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.972199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.972234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.972586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.972616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 
00:31:52.291 [2024-11-19 09:49:38.972965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.972996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.973345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.973377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.973749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.973780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.974146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.974190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.974549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.974582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 
00:31:52.291 [2024-11-19 09:49:38.974943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.974974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.975331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.975363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.975745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.975775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.976130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.976175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.976547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.976578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 
00:31:52.291 [2024-11-19 09:49:38.976932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.976964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.977327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.977359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.977706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.977737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.978089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.978120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.978478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.978511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 
00:31:52.291 [2024-11-19 09:49:38.978867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.978898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.979263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.979295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.979653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.979684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.980033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.980065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 00:31:52.291 [2024-11-19 09:49:38.980414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.291 [2024-11-19 09:49:38.980445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.291 qpair failed and we were unable to recover it. 
00:31:52.292 [2024-11-19 09:49:38.980810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.980840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.981204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.981239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.981648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.981679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.982029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.982060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.982416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.982447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 
00:31:52.292 [2024-11-19 09:49:38.982803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.982834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.983200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.983237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.983612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.983644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.984003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.984035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.984398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.984430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 
00:31:52.292 [2024-11-19 09:49:38.984784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.984817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.985178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.985212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.985644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.985675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.986019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.986051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.986418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.986450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 
00:31:52.292 [2024-11-19 09:49:38.986800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.986838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.987188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.987220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.987575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.987607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.988008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.988039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.988400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.988432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 
00:31:52.292 [2024-11-19 09:49:38.988789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.988820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.989197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.989229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.989577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.989609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.989961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.989993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.990341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.990376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 
00:31:52.292 [2024-11-19 09:49:38.990719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.990749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.991117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.991148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.991540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.991572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.991925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.991958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.992356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.992390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 
00:31:52.292 [2024-11-19 09:49:38.992749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.992782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.993110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.993141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.993529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.993560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.993914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.993947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.994311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.994343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 
00:31:52.292 [2024-11-19 09:49:38.994709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.994743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.995105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.292 [2024-11-19 09:49:38.995135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.292 qpair failed and we were unable to recover it. 00:31:52.292 [2024-11-19 09:49:38.995500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.995534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:38.995895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.995926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:38.996286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.996320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 
00:31:52.293 [2024-11-19 09:49:38.996678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.996710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:38.997065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.997098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:38.997508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.997542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:38.997899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.997929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:38.998284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.998316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 
00:31:52.293 [2024-11-19 09:49:38.998670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.998702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:38.999052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.999085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:38.999431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.999463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:38.999816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:38.999848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:39.000130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:39.000177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 
00:31:52.293 [2024-11-19 09:49:39.000641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:39.000672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:39.001026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:39.001058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:39.001433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:39.001467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:39.001813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:39.001845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.293 [2024-11-19 09:49:39.002085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:39.002118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 
00:31:52.293 [2024-11-19 09:49:39.002473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.293 [2024-11-19 09:49:39.002514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.293 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.002767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.002802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.003174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.003210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.003559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.003590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.003949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.003982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 
00:31:52.569 [2024-11-19 09:49:39.004332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.004364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.004731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.004762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.005123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.005154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.005558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.005590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.005952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.005985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 
00:31:52.569 [2024-11-19 09:49:39.006337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.006372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.006615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.006647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.006990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.007021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.007399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.007433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.007792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.007826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 
00:31:52.569 [2024-11-19 09:49:39.008067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.008099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.008466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.008499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.008924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.008956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.009307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.009340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 00:31:52.569 [2024-11-19 09:49:39.009706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.569 [2024-11-19 09:49:39.009739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.569 qpair failed and we were unable to recover it. 
00:31:52.570 [2024-11-19 09:49:39.010095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.010129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.010418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.010451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.010803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.010835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.011194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.011226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.011616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.011648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 
00:31:52.570 [2024-11-19 09:49:39.012010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.012043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.012398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.012432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.012817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.012850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.013210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.013243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.013618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.013650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 
00:31:52.570 [2024-11-19 09:49:39.014001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.014032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.014425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.014458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.014724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.014758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.015016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.015049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.015446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.015479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 
00:31:52.570 [2024-11-19 09:49:39.015814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.015846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.016185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.016219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.016552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.016585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.016948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.016981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.017356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.017390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 
00:31:52.570 [2024-11-19 09:49:39.017626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.017661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.017995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.018026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.018267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.018299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.018677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.018709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.019041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.019072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 
00:31:52.570 [2024-11-19 09:49:39.019439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.019474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.019837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.019868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.020197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.020228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.020619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.020650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.021010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.021043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 
00:31:52.570 [2024-11-19 09:49:39.021382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.021415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.021777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.021810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.022045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.022076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.022441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.022476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 00:31:52.570 [2024-11-19 09:49:39.022877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.570 [2024-11-19 09:49:39.022908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.570 qpair failed and we were unable to recover it. 
00:31:52.570 [2024-11-19 09:49:39.023250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.570 [2024-11-19 09:49:39.023283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.570 qpair failed and we were unable to recover it.
00:31:52.570 [2024-11-19 09:49:39.023554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.570 [2024-11-19 09:49:39.023585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.570 qpair failed and we were unable to recover it.
00:31:52.570 [2024-11-19 09:49:39.023818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.570 [2024-11-19 09:49:39.023851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.024213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.024246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.024670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.024701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.025056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.025088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.025458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.025491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.025850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.025882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.026282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.026317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.026674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.026705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.027067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.027099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.027439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.027471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.027824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.027858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.028254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.028287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.028654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.028686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.029026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.029057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.029425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.029457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.029831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.029862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.030238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.030272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.030629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.030661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.030870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.030899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.031249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.031282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.031707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.031737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.032089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.032119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.032541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.032575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.032930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.032968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.033322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.033358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.033717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.033748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.034117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.034149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.034518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.034550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.034764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.034793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.035026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.035062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.035390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.035423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.035770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.035804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.036180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.036211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.036574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.036604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.036966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.036998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.037336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.037368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.037713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.037743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.038101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.571 [2024-11-19 09:49:39.038132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.571 qpair failed and we were unable to recover it.
00:31:52.571 [2024-11-19 09:49:39.038507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.038539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.038893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.038923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.039290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.039324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.039620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.039652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.039880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.039912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.040308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.040341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.040703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.040733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.041115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.041145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.041395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.041427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.041770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.041801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.042129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.042173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.042427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.042458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.042701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.042731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.043094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.043126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.043510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.043545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.043919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.043950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.044200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.044233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.044475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.044507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.044884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.044915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.045170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.045204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.045453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.045485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.045837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.045867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.046239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.046271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.046605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.046636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.046989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.047020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.047388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.047426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.047788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.047820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.048192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.048225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.048553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.048582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.048929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.048961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.049300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.049332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.049695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.049726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.050095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.050126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.050508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.050540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.050792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.050824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.051195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.051227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.051454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.572 [2024-11-19 09:49:39.051487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.572 qpair failed and we were unable to recover it.
00:31:52.572 [2024-11-19 09:49:39.051833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.051864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.052225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.052257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.052628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.052661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.053021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.053051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.053420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.053453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.053813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.053846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.054201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.054234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.054636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.054666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.055014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.055044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.055380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.055413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.055767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.055797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.056169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.056201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.056420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.056450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.056827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.056859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.057194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.057225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.057630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.057661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.057972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.058001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.058349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.573 [2024-11-19 09:49:39.058380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.573 qpair failed and we were unable to recover it.
00:31:52.573 [2024-11-19 09:49:39.058732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.058762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.059126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.059157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.059558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.059589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.059952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.059984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.060337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.060370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 
00:31:52.573 [2024-11-19 09:49:39.060724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.060754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.061110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.061140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.061514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.061545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.061905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.061937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.062299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.062330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 
00:31:52.573 [2024-11-19 09:49:39.062682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.062714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.063075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.063105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.063463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.063496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.063854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.063886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.064228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.064261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 
00:31:52.573 [2024-11-19 09:49:39.064663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.064697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.065035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.065065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.065405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.065438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.065799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.065831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.573 qpair failed and we were unable to recover it. 00:31:52.573 [2024-11-19 09:49:39.066190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.573 [2024-11-19 09:49:39.066223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 
00:31:52.574 [2024-11-19 09:49:39.066578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.066608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.066854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.066886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.067132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.067172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.067536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.067568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.067966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.067996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 
00:31:52.574 [2024-11-19 09:49:39.068366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.068399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.068757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.068786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.069186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.069218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.069571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.069602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.070022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.070053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 
00:31:52.574 [2024-11-19 09:49:39.070391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.070423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.070767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.070797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.071145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.071184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.071521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.071553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.071896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.071927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 
00:31:52.574 [2024-11-19 09:49:39.072289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.072321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.072684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.072716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.073080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.073118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.073506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.073538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.073903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.073934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 
00:31:52.574 [2024-11-19 09:49:39.074183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.074216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.074590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.074621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.074977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.075008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.075386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.075416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.075773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.075805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 
00:31:52.574 [2024-11-19 09:49:39.076170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.076203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.076557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.076588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.076950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.574 [2024-11-19 09:49:39.076982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.574 qpair failed and we were unable to recover it. 00:31:52.574 [2024-11-19 09:49:39.077340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.077371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.077731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.077760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 
00:31:52.575 [2024-11-19 09:49:39.078121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.078152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.078552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.078584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.078937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.078968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.079328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.079361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.079709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.079742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 
00:31:52.575 [2024-11-19 09:49:39.080097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.080129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.080507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.080540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.080882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.080913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.081267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.081298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.081654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.081684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 
00:31:52.575 [2024-11-19 09:49:39.082037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.082067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.082427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.082458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.082817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.082847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.083210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.083241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.083618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.083648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 
00:31:52.575 [2024-11-19 09:49:39.084009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.084039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.084411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.084443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.084844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.084877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.575 qpair failed and we were unable to recover it. 00:31:52.575 [2024-11-19 09:49:39.085263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.575 [2024-11-19 09:49:39.085294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.085642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.085672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 
00:31:52.576 [2024-11-19 09:49:39.086030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.086061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.086464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.086496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.086846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.086877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.087113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.087145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.087524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.087556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 
00:31:52.576 [2024-11-19 09:49:39.087908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.087938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.088294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.088327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.088678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.088714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.089067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.089098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.089440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.089473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 
00:31:52.576 [2024-11-19 09:49:39.089905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.089935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.090292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.090324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.090669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.090699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.091058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.091088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 00:31:52.576 [2024-11-19 09:49:39.091439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.576 [2024-11-19 09:49:39.091471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.576 qpair failed and we were unable to recover it. 
00:31:52.576 [2024-11-19 09:49:39.091819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.576 [2024-11-19 09:49:39.091850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.576 qpair failed and we were unable to recover it.
00:31:52.576-00:31:52.583 [the same three-line error sequence repeats continuously from 2024-11-19 09:49:39.092243 through 09:49:39.136374, every entry identical apart from the timestamp: posix.c:1054:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."]
00:31:52.583 [2024-11-19 09:49:39.136743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.136775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 00:31:52.583 [2024-11-19 09:49:39.137125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.137166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 00:31:52.583 [2024-11-19 09:49:39.137545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.137577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 00:31:52.583 [2024-11-19 09:49:39.137940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.137971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 00:31:52.583 [2024-11-19 09:49:39.138336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.138368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 
00:31:52.583 [2024-11-19 09:49:39.138730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.138761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 00:31:52.583 [2024-11-19 09:49:39.139117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.139148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 00:31:52.583 [2024-11-19 09:49:39.139544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.139578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 00:31:52.583 [2024-11-19 09:49:39.139815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.139849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 00:31:52.583 [2024-11-19 09:49:39.140210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.140243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 
00:31:52.583 [2024-11-19 09:49:39.140593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.583 [2024-11-19 09:49:39.140624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.583 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.141007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.141038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.141287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.141318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.141671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.141702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.142052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.142084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 
00:31:52.584 [2024-11-19 09:49:39.142436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.142467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.142827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.142858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.143212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.143244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.143605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.143634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.143984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.144014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 
00:31:52.584 [2024-11-19 09:49:39.144388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.144421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.144662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.144696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.145085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.145116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.145473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.145505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.145858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.145888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 
00:31:52.584 [2024-11-19 09:49:39.146261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.146294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.146542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.146575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.146930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.146962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.147309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.147341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 00:31:52.584 [2024-11-19 09:49:39.147703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.584 [2024-11-19 09:49:39.147734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.584 qpair failed and we were unable to recover it. 
00:31:52.585 [2024-11-19 09:49:39.148091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.148121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.148486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.148518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.148854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.148885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.149243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.149274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.149628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.149658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 
00:31:52.585 [2024-11-19 09:49:39.150015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.150045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.150420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.150452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.150818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.150855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.151204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.151235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.151586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.151617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 
00:31:52.585 [2024-11-19 09:49:39.151964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.151994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.152352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.152382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.152755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.152785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.153140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.153183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.153533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.153563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 
00:31:52.585 [2024-11-19 09:49:39.153922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.153952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.154323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.585 [2024-11-19 09:49:39.154355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.585 qpair failed and we were unable to recover it. 00:31:52.585 [2024-11-19 09:49:39.154594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.154626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.154981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.155012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.155350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.155383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 
00:31:52.586 [2024-11-19 09:49:39.155740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.155770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.156129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.156171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.156525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.156557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.156912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.156942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.157301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.157333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 
00:31:52.586 [2024-11-19 09:49:39.157761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.157793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.158169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.158202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.158551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.158582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.158940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.158970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.159342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.159375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 
00:31:52.586 [2024-11-19 09:49:39.159735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.159765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.160119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.586 [2024-11-19 09:49:39.160149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.586 qpair failed and we were unable to recover it. 00:31:52.586 [2024-11-19 09:49:39.160519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.160550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 00:31:52.587 [2024-11-19 09:49:39.160780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.160813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 00:31:52.587 [2024-11-19 09:49:39.161176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.161210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 
00:31:52.587 [2024-11-19 09:49:39.161571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.161603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 00:31:52.587 [2024-11-19 09:49:39.161956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.161987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 00:31:52.587 [2024-11-19 09:49:39.162344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.162375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 00:31:52.587 [2024-11-19 09:49:39.162720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.162750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 00:31:52.587 [2024-11-19 09:49:39.163110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.163139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 
00:31:52.587 [2024-11-19 09:49:39.163506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.163537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 00:31:52.587 [2024-11-19 09:49:39.163897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.163927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 00:31:52.587 [2024-11-19 09:49:39.164280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.587 [2024-11-19 09:49:39.164311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.587 qpair failed and we were unable to recover it. 00:31:52.587 [2024-11-19 09:49:39.164674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.588 [2024-11-19 09:49:39.164705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.588 qpair failed and we were unable to recover it. 00:31:52.588 [2024-11-19 09:49:39.165071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.588 [2024-11-19 09:49:39.165103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.588 qpair failed and we were unable to recover it. 
00:31:52.588 [2024-11-19 09:49:39.165499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.165530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.165881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.165911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.166275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.166311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.166543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.166576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.166936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.166968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.167196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.167231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.167578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.167611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.167963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.167994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.168353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.168384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.168761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.168791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.169146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.169189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.169534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.169565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.169927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.169957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.170317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.170348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.170577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.170608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.170976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.171007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.171387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.171420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.171641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.171674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.172019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.172052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.172307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.172343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.588 [2024-11-19 09:49:39.172723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.588 [2024-11-19 09:49:39.172755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.588 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.173120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.173151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.173511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.173542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.173894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.173924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.174285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.174316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.174675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.174706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.175060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.175091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.175447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.175480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.175834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.175864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.176225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.176258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.176621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.176652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.176998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.177030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.177389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.177419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.177772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.177802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.178178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.178210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.178554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.178584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.178932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.178962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.179320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.179352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.179711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.179741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.180105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.180135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.180503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.180534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.180885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.180915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.181280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.181317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.181664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.181695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.182084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.182114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.182465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.182498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.182855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.182885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.183258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.183290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.183639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.183671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.184033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.184063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.184425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.184457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.184813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.184844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.185205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.185238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.185591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.185623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.185984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.186015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.186392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.186425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.186656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.186691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.187058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.187089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.187446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.187477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.187835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.187866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.188228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.188260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.188613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.188644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.188991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.189023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.189385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.189416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.189781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.189814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.190177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.190211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.190531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.190563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.190948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.190979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.589 qpair failed and we were unable to recover it.
00:31:52.589 [2024-11-19 09:49:39.191330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.589 [2024-11-19 09:49:39.191364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.191755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.191787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.192139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.192182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.192537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.192567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.192927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.192958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.193307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.193338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.193739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.193772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.194119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.194149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.194470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.194503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.194853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.194884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.195236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.195267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.195641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.195670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.196031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.196062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.196427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.196459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.196815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.196851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.197202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.197234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.197597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.197630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.198003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.198033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.198395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.198426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.198849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.198880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.199110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.199141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.199470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.199502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.199852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.199882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.200236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.200266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.200635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.200665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.201023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.201053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.201407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.201439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.201791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.201821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.202179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.202212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.202433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.202467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.202823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.202853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.203213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.203245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.203604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.203633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.203995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.204025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.204394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.204425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.204783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.204814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.205232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.205265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.205596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.205626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.205979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.206010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.590 [2024-11-19 09:49:39.206387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.590 [2024-11-19 09:49:39.206420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.590 qpair failed and we were unable to recover it.
00:31:52.591 [2024-11-19 09:49:39.206775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.591 [2024-11-19 09:49:39.206807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.591 qpair failed and we were unable to recover it.
00:31:52.591 [2024-11-19 09:49:39.207182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.591 [2024-11-19 09:49:39.207216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.591 qpair failed and we were unable to recover it.
00:31:52.591 [2024-11-19 09:49:39.207563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.591 [2024-11-19 09:49:39.207595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.591 qpair failed and we were unable to recover it.
00:31:52.591 [2024-11-19 09:49:39.207947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.591 [2024-11-19 09:49:39.207981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.591 qpair failed and we were unable to recover it.
00:31:52.591 [2024-11-19 09:49:39.208331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.591 [2024-11-19 09:49:39.208364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.591 qpair failed and we were unable to recover it.
00:31:52.591 [2024-11-19 09:49:39.208721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.591 [2024-11-19 09:49:39.208752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.591 qpair failed and we were unable to recover it.
00:31:52.591 [2024-11-19 09:49:39.209144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.591 [2024-11-19 09:49:39.209198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.591 qpair failed and we were unable to recover it.
00:31:52.591 [2024-11-19 09:49:39.209543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.209575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 00:31:52.591 [2024-11-19 09:49:39.209928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.209960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 00:31:52.591 [2024-11-19 09:49:39.210321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.210355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 00:31:52.591 [2024-11-19 09:49:39.210745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.210776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 00:31:52.591 [2024-11-19 09:49:39.211123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.211155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 
00:31:52.591 [2024-11-19 09:49:39.211512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.211543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 00:31:52.591 [2024-11-19 09:49:39.211910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.211942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 00:31:52.591 [2024-11-19 09:49:39.212301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.212339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 00:31:52.591 [2024-11-19 09:49:39.212681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.212712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 00:31:52.591 [2024-11-19 09:49:39.213073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.213105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.591 qpair failed and we were unable to recover it. 
00:31:52.591 [2024-11-19 09:49:39.213519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.591 [2024-11-19 09:49:39.213551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.213910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.213942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.214300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.214332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.214697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.214728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.215077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.215108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 
00:31:52.592 [2024-11-19 09:49:39.215469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.215500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.215850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.215882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.216241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.216273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.216674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.216706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.217066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.217097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 
00:31:52.592 [2024-11-19 09:49:39.217378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.217408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.217772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.217802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.218169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.592 [2024-11-19 09:49:39.218203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.592 qpair failed and we were unable to recover it. 00:31:52.592 [2024-11-19 09:49:39.218548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.218578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 00:31:52.593 [2024-11-19 09:49:39.218928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.218959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 
00:31:52.593 [2024-11-19 09:49:39.219314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.219346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 00:31:52.593 [2024-11-19 09:49:39.219697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.219728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 00:31:52.593 [2024-11-19 09:49:39.220088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.220119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 00:31:52.593 [2024-11-19 09:49:39.220480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.220512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 00:31:52.593 [2024-11-19 09:49:39.220874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.220905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 
00:31:52.593 [2024-11-19 09:49:39.221240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.221272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 00:31:52.593 [2024-11-19 09:49:39.221631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.221663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 00:31:52.593 [2024-11-19 09:49:39.222011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.222042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 00:31:52.593 [2024-11-19 09:49:39.222403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.593 [2024-11-19 09:49:39.222434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.593 qpair failed and we were unable to recover it. 00:31:52.594 [2024-11-19 09:49:39.222793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.594 [2024-11-19 09:49:39.222827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.594 qpair failed and we were unable to recover it. 
00:31:52.594 [2024-11-19 09:49:39.223192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.594 [2024-11-19 09:49:39.223225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.594 qpair failed and we were unable to recover it. 00:31:52.594 [2024-11-19 09:49:39.223573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.594 [2024-11-19 09:49:39.223603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.594 qpair failed and we were unable to recover it. 00:31:52.594 [2024-11-19 09:49:39.223846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.594 [2024-11-19 09:49:39.223880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.594 qpair failed and we were unable to recover it. 00:31:52.594 [2024-11-19 09:49:39.224231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.594 [2024-11-19 09:49:39.224263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.594 qpair failed and we were unable to recover it. 00:31:52.594 [2024-11-19 09:49:39.224621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.594 [2024-11-19 09:49:39.224654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.594 qpair failed and we were unable to recover it. 
00:31:52.594 [2024-11-19 09:49:39.225007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.594 [2024-11-19 09:49:39.225038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.225497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.225530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.226601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.226652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.227041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.227075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.227434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.227470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 
00:31:52.595 [2024-11-19 09:49:39.227831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.227862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.228197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.228230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.228626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.228665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.229039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.229071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.229405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.229437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 
00:31:52.595 [2024-11-19 09:49:39.229774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.229807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.230176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.230209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.230581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.595 [2024-11-19 09:49:39.230612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.595 qpair failed and we were unable to recover it. 00:31:52.595 [2024-11-19 09:49:39.230965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.230995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 00:31:52.596 [2024-11-19 09:49:39.231344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.231376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 
00:31:52.596 [2024-11-19 09:49:39.231733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.231763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 00:31:52.596 [2024-11-19 09:49:39.232121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.232153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 00:31:52.596 [2024-11-19 09:49:39.232547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.232577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 00:31:52.596 [2024-11-19 09:49:39.232935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.232966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 00:31:52.596 [2024-11-19 09:49:39.233328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.233362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 
00:31:52.596 [2024-11-19 09:49:39.233725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.233756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 00:31:52.596 [2024-11-19 09:49:39.234118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.234149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 00:31:52.596 [2024-11-19 09:49:39.234520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.234554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 00:31:52.596 [2024-11-19 09:49:39.234900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.234931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 00:31:52.596 [2024-11-19 09:49:39.235295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.596 [2024-11-19 09:49:39.235326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.596 qpair failed and we were unable to recover it. 
00:31:52.597 [2024-11-19 09:49:39.235689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.235721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.597 qpair failed and we were unable to recover it. 00:31:52.597 [2024-11-19 09:49:39.236068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.236100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.597 qpair failed and we were unable to recover it. 00:31:52.597 [2024-11-19 09:49:39.236457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.236488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.597 qpair failed and we were unable to recover it. 00:31:52.597 [2024-11-19 09:49:39.236850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.236882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.597 qpair failed and we were unable to recover it. 00:31:52.597 [2024-11-19 09:49:39.237248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.237282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.597 qpair failed and we were unable to recover it. 
00:31:52.597 [2024-11-19 09:49:39.237638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.237669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.597 qpair failed and we were unable to recover it. 00:31:52.597 [2024-11-19 09:49:39.238037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.238068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.597 qpair failed and we were unable to recover it. 00:31:52.597 [2024-11-19 09:49:39.238467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.238501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.597 qpair failed and we were unable to recover it. 00:31:52.597 [2024-11-19 09:49:39.238856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.238887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.597 qpair failed and we were unable to recover it. 00:31:52.597 [2024-11-19 09:49:39.239250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.597 [2024-11-19 09:49:39.239283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.598 qpair failed and we were unable to recover it. 
00:31:52.598 [2024-11-19 09:49:39.239636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.598 [2024-11-19 09:49:39.239668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.598 qpair failed and we were unable to recover it.
00:31:52.598 [... the same three-line sequence — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from [2024-11-19 09:49:39.239901] through [2024-11-19 09:49:39.282504] ...]
00:31:52.607 [2024-11-19 09:49:39.282740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.607 [2024-11-19 09:49:39.282773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.607 qpair failed and we were unable to recover it. 00:31:52.607 [2024-11-19 09:49:39.283014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.607 [2024-11-19 09:49:39.283044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.607 qpair failed and we were unable to recover it. 00:31:52.607 [2024-11-19 09:49:39.283420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.607 [2024-11-19 09:49:39.283451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.607 qpair failed and we were unable to recover it. 00:31:52.607 [2024-11-19 09:49:39.283813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.607 [2024-11-19 09:49:39.283844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.607 qpair failed and we were unable to recover it. 00:31:52.607 [2024-11-19 09:49:39.284191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.607 [2024-11-19 09:49:39.284228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.607 qpair failed and we were unable to recover it. 
00:31:52.607 [2024-11-19 09:49:39.284597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.607 [2024-11-19 09:49:39.284630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.607 qpair failed and we were unable to recover it. 00:31:52.607 [2024-11-19 09:49:39.284995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.607 [2024-11-19 09:49:39.285026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.607 qpair failed and we were unable to recover it. 00:31:52.607 [2024-11-19 09:49:39.285387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.607 [2024-11-19 09:49:39.285420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.607 qpair failed and we were unable to recover it. 00:31:52.607 [2024-11-19 09:49:39.285759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.607 [2024-11-19 09:49:39.285790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.607 qpair failed and we were unable to recover it. 00:31:52.607 [2024-11-19 09:49:39.286040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.608 [2024-11-19 09:49:39.286072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.608 qpair failed and we were unable to recover it. 
00:31:52.608 [2024-11-19 09:49:39.286440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.608 [2024-11-19 09:49:39.286472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.608 qpair failed and we were unable to recover it. 00:31:52.608 [2024-11-19 09:49:39.286836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.608 [2024-11-19 09:49:39.286867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.608 qpair failed and we were unable to recover it. 00:31:52.608 [2024-11-19 09:49:39.287288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.608 [2024-11-19 09:49:39.287320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.608 qpair failed and we were unable to recover it. 00:31:52.608 [2024-11-19 09:49:39.287662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.608 [2024-11-19 09:49:39.287696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.608 qpair failed and we were unable to recover it. 00:31:52.608 [2024-11-19 09:49:39.288042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.608 [2024-11-19 09:49:39.288073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.608 qpair failed and we were unable to recover it. 
00:31:52.608 [2024-11-19 09:49:39.288444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.608 [2024-11-19 09:49:39.288477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.608 qpair failed and we were unable to recover it. 00:31:52.608 [2024-11-19 09:49:39.288858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.608 [2024-11-19 09:49:39.288890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.609 qpair failed and we were unable to recover it. 00:31:52.609 [2024-11-19 09:49:39.289280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.609 [2024-11-19 09:49:39.289312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.609 qpair failed and we were unable to recover it. 00:31:52.609 [2024-11-19 09:49:39.289656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.609 [2024-11-19 09:49:39.289690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.609 qpair failed and we were unable to recover it. 00:31:52.609 [2024-11-19 09:49:39.290047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.609 [2024-11-19 09:49:39.290076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.609 qpair failed and we were unable to recover it. 
00:31:52.609 [2024-11-19 09:49:39.290334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.609 [2024-11-19 09:49:39.290365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.609 qpair failed and we were unable to recover it. 00:31:52.609 [2024-11-19 09:49:39.290629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.609 [2024-11-19 09:49:39.290662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.609 qpair failed and we were unable to recover it. 00:31:52.609 [2024-11-19 09:49:39.291005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.609 [2024-11-19 09:49:39.291036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.609 qpair failed and we were unable to recover it. 00:31:52.609 [2024-11-19 09:49:39.291413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.610 [2024-11-19 09:49:39.291447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.610 qpair failed and we were unable to recover it. 00:31:52.610 [2024-11-19 09:49:39.291822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.610 [2024-11-19 09:49:39.291853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.610 qpair failed and we were unable to recover it. 
00:31:52.610 [2024-11-19 09:49:39.292256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.610 [2024-11-19 09:49:39.292290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.610 qpair failed and we were unable to recover it. 00:31:52.610 [2024-11-19 09:49:39.292555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.610 [2024-11-19 09:49:39.292586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.610 qpair failed and we were unable to recover it. 00:31:52.610 [2024-11-19 09:49:39.292981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.610 [2024-11-19 09:49:39.293013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.610 qpair failed and we were unable to recover it. 00:31:52.610 [2024-11-19 09:49:39.293347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.610 [2024-11-19 09:49:39.293380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.610 qpair failed and we were unable to recover it. 00:31:52.610 [2024-11-19 09:49:39.293741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.610 [2024-11-19 09:49:39.293773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.610 qpair failed and we were unable to recover it. 
00:31:52.610 [2024-11-19 09:49:39.294123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.610 [2024-11-19 09:49:39.294153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.610 qpair failed and we were unable to recover it. 00:31:52.610 [2024-11-19 09:49:39.294562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.610 [2024-11-19 09:49:39.294593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.610 qpair failed and we were unable to recover it. 00:31:52.610 [2024-11-19 09:49:39.294953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.294984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.295346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.295377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.295612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.295642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 
00:31:52.611 [2024-11-19 09:49:39.295868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.295903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.296144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.296201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.296555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.296588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.296959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.296989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.297348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.297382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 
00:31:52.611 [2024-11-19 09:49:39.297740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.297770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.298130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.298173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.298544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.298575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.298919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.298950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 00:31:52.611 [2024-11-19 09:49:39.299287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.611 [2024-11-19 09:49:39.299326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.611 qpair failed and we were unable to recover it. 
00:31:52.890 [2024-11-19 09:49:39.299706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.299740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.300097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.300129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.300579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.300610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.300987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.301019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.301331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.301364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 
00:31:52.890 [2024-11-19 09:49:39.301701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.301734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.302101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.302133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.302476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.302508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.302870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.302903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.303231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.303263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 
00:31:52.890 [2024-11-19 09:49:39.303636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.303669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.304029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.304061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.304416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.304449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.304679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.304710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.305074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.305106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 
00:31:52.890 [2024-11-19 09:49:39.305542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.305575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.305923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.305955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.306309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.306342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.306688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.306719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.307097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.307129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 
00:31:52.890 [2024-11-19 09:49:39.307499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.307530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.307881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.307913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.308272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.308304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.308663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.308694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.309053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.309085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 
00:31:52.890 [2024-11-19 09:49:39.309429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.309461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.309815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.309846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.310207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.310239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.310615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.310647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 00:31:52.890 [2024-11-19 09:49:39.311016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.311046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it. 
00:31:52.890 [2024-11-19 09:49:39.311406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.890 [2024-11-19 09:49:39.311437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.890 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure triplets (posix.c:1054 errno = 111, nvme_tcp.c:2288 tqpair=0x7f1408000b90, addr=10.0.0.2, port=4420) repeated from 09:49:39.311680 through 09:49:39.355619 omitted ...]
00:31:52.893 [2024-11-19 09:49:39.355983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.893 [2024-11-19 09:49:39.356016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.893 qpair failed and we were unable to recover it. 00:31:52.893 [2024-11-19 09:49:39.356390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.893 [2024-11-19 09:49:39.356423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.893 qpair failed and we were unable to recover it. 00:31:52.893 [2024-11-19 09:49:39.356769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.893 [2024-11-19 09:49:39.356802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.357153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.357196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.357539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.357572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 
00:31:52.894 [2024-11-19 09:49:39.357933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.357964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.358325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.358359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.358691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.358722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.359090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.359123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.359514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.359546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 
00:31:52.894 [2024-11-19 09:49:39.359894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.359927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.360288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.360321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.360681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.360714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.361057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.361088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.361448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.361489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 
00:31:52.894 [2024-11-19 09:49:39.361836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.361868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.362225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.362258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.362625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.362656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.363013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.363046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.363451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.363483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 
00:31:52.894 [2024-11-19 09:49:39.363868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.363900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.364253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.364286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.364654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.364685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.365046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.365078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.365428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.365459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 
00:31:52.894 [2024-11-19 09:49:39.365821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.365851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.366213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.366247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.366612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.366643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.367023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.367056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.367413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.367445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 
00:31:52.894 [2024-11-19 09:49:39.367781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.367813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.368179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.368212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.368567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.894 [2024-11-19 09:49:39.368597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.894 qpair failed and we were unable to recover it. 00:31:52.894 [2024-11-19 09:49:39.368957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.368988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.369345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.369379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 
00:31:52.895 [2024-11-19 09:49:39.369713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.369743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.370094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.370126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.370499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.370530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.370904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.370936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.371302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.371335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 
00:31:52.895 [2024-11-19 09:49:39.371683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.371714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.372080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.372112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.372469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.372503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.372854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.372884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.373239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.373273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 
00:31:52.895 [2024-11-19 09:49:39.373654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.373685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.374045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.374078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.374432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.374464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.374833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.374865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.375222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.375253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 
00:31:52.895 [2024-11-19 09:49:39.375615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.375647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.376006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.376037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.376394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.376427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.376779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.376810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.377154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.377205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 
00:31:52.895 [2024-11-19 09:49:39.377688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.377719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.378064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.378097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.378352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.378388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.378740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.378771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.379118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.379147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 
00:31:52.895 [2024-11-19 09:49:39.379536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.379567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.379926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.379958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.380316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.380350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.380706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.380736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.381096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.381127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 
00:31:52.895 [2024-11-19 09:49:39.381494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.381527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.381882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.381912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.382291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.382324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.382694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.382726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.383074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.383105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 
00:31:52.895 [2024-11-19 09:49:39.383497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.895 [2024-11-19 09:49:39.383529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.895 qpair failed and we were unable to recover it. 00:31:52.895 [2024-11-19 09:49:39.383891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.383923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.384367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.384398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.384748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.384780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.385131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.385172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 
00:31:52.896 [2024-11-19 09:49:39.385547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.385579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.385937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.385969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.386341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.386373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.386724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.386755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.387110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.387139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 
00:31:52.896 [2024-11-19 09:49:39.387508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.387538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.387894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.387925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.388338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.388370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.388728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.388760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.389150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.389191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 
00:31:52.896 [2024-11-19 09:49:39.389565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.389595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.389957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.389989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.390357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.390389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.390743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.390773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.391130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.391169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 
00:31:52.896 [2024-11-19 09:49:39.391535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.391567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.391919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.391951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.392287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.392319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.392560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.392591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.392827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.392870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 
00:31:52.896 [2024-11-19 09:49:39.393250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.393282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.393641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.393673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.394109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.394140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.394516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.394549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.394939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.394971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 
00:31:52.896 [2024-11-19 09:49:39.395315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.395348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.395706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.395735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.396095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.396124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.396487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.396519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.396881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.396912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 
00:31:52.896 [2024-11-19 09:49:39.397270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.397303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.397657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.397689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.398047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.398078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.896 [2024-11-19 09:49:39.398441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.896 [2024-11-19 09:49:39.398475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.896 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.398825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.398855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 
00:31:52.897 [2024-11-19 09:49:39.399216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.399247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.399604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.399635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.399987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.400018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.400427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.400458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.400805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.400835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 
00:31:52.897 [2024-11-19 09:49:39.401196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.401227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.401592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.401623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.401975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.402005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.402350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.402382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.402745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.402774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 
00:31:52.897 [2024-11-19 09:49:39.403125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.403155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.403521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.403553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.403903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.403933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.404294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.404325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.404685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.404714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 
00:31:52.897 [2024-11-19 09:49:39.405082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.405113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.405470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.405502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.405845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.405875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.406233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.406265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.406614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.406646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 
00:31:52.897 [2024-11-19 09:49:39.407009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.407039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.407401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.407435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.407788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.407819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.408183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.408216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.408565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.408602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 
00:31:52.897 [2024-11-19 09:49:39.408998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.409029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.409385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.409418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.409778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.409809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.410172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.410206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.410604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.410635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 
00:31:52.897 [2024-11-19 09:49:39.410991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.411022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.411390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.411421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.411777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.411806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.412172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.412204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.412551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.412583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 
00:31:52.897 [2024-11-19 09:49:39.412947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.412978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.897 qpair failed and we were unable to recover it. 00:31:52.897 [2024-11-19 09:49:39.413348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.897 [2024-11-19 09:49:39.413380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.413744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.413774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.414129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.414180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.414553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.414585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 
00:31:52.898 [2024-11-19 09:49:39.414931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.414961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.415331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.415363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.415716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.415747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.416186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.416218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.416566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.416597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 
00:31:52.898 [2024-11-19 09:49:39.416956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.416986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.417353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.417385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.417745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.417776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.418131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.418185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.418568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.418599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 
00:31:52.898 [2024-11-19 09:49:39.418956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.418986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.419338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.419371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.419723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.419753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.420116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.420145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.420525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.420556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 
00:31:52.898 [2024-11-19 09:49:39.420915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.420947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.421303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.421336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.421700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.421729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.422087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.422118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 00:31:52.898 [2024-11-19 09:49:39.422494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.898 [2024-11-19 09:49:39.422527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.898 qpair failed and we were unable to recover it. 
00:31:52.898 [2024-11-19 09:49:39.422886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.898 [2024-11-19 09:49:39.422917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.898 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated 114 more times, log timestamps 00:31:52.898-00:31:52.901, event timestamps 2024-11-19 09:49:39.423274 through 09:49:39.466913 ...]
00:31:52.901 [2024-11-19 09:49:39.467270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.901 [2024-11-19 09:49:39.467303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.901 qpair failed and we were unable to recover it. 00:31:52.901 [2024-11-19 09:49:39.467524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.901 [2024-11-19 09:49:39.467555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.901 qpair failed and we were unable to recover it. 00:31:52.901 [2024-11-19 09:49:39.467903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.901 [2024-11-19 09:49:39.467934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.901 qpair failed and we were unable to recover it. 00:31:52.901 [2024-11-19 09:49:39.468296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.901 [2024-11-19 09:49:39.468329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.901 qpair failed and we were unable to recover it. 00:31:52.901 [2024-11-19 09:49:39.468685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.901 [2024-11-19 09:49:39.468716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.901 qpair failed and we were unable to recover it. 
00:31:52.901 [2024-11-19 09:49:39.469079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.901 [2024-11-19 09:49:39.469110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.901 qpair failed and we were unable to recover it. 00:31:52.901 [2024-11-19 09:49:39.469478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.901 [2024-11-19 09:49:39.469509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.901 qpair failed and we were unable to recover it. 00:31:52.901 [2024-11-19 09:49:39.469864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.469894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.470258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.470290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.470641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.470672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 
00:31:52.902 [2024-11-19 09:49:39.471030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.471061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.471423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.471457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.471818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.471847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.472190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.472222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.472572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.472602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 
00:31:52.902 [2024-11-19 09:49:39.472959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.472989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.473347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.473379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.473745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.473775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.474144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.474182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.474533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.474564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 
00:31:52.902 [2024-11-19 09:49:39.474913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.474947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.475302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.475334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.475734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.475766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.476123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.476155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.476532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.476562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 
00:31:52.902 [2024-11-19 09:49:39.476919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.476949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.477313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.477344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.477708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.477738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.478093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.478123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.478483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.478516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 
00:31:52.902 [2024-11-19 09:49:39.478874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.478904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.479262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.479294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.479650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.479680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.480039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.480069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.480432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.480463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 
00:31:52.902 [2024-11-19 09:49:39.480845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.480877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.481229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.481267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.481616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.481646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.482008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.482040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.482411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.482443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 
00:31:52.902 [2024-11-19 09:49:39.482813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.482843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.483204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.483236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.483588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.483622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.483966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.483996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 00:31:52.902 [2024-11-19 09:49:39.484269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.902 [2024-11-19 09:49:39.484300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.902 qpair failed and we were unable to recover it. 
00:31:52.903 [2024-11-19 09:49:39.484644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.484674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.485032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.485061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.485417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.485449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.485810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.485840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.486193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.486224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 
00:31:52.903 [2024-11-19 09:49:39.486586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.486616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.486964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.486994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.487403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.487437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.487809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.487839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.488196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.488227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 
00:31:52.903 [2024-11-19 09:49:39.488579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.488609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.488967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.488996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.489400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.489432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.489789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.489819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.490178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.490209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 
00:31:52.903 [2024-11-19 09:49:39.490563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.490596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.490952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.490982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.491347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.491380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.491740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.491772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.492119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.492151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 
00:31:52.903 [2024-11-19 09:49:39.492490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.492521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.492888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.492919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.493176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.493209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.493582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.493611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.494038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.494070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 
00:31:52.903 [2024-11-19 09:49:39.494421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.494455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.494847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.494878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.495235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.495269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.495620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.495652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.496017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.496048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 
00:31:52.903 [2024-11-19 09:49:39.496407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.496439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.496798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.496835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.497188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.903 [2024-11-19 09:49:39.497221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.903 qpair failed and we were unable to recover it. 00:31:52.903 [2024-11-19 09:49:39.497579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.904 [2024-11-19 09:49:39.497610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.904 qpair failed and we were unable to recover it. 00:31:52.904 [2024-11-19 09:49:39.497945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.904 [2024-11-19 09:49:39.497976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.904 qpair failed and we were unable to recover it. 
00:31:52.906 [2024-11-19 09:49:39.538950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.906 [2024-11-19 09:49:39.538981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.906 qpair failed and we were unable to recover it. 00:31:52.906 [2024-11-19 09:49:39.539338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.539369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.539718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.539748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.540115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.540145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.540544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.540575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 
00:31:52.907 [2024-11-19 09:49:39.540948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.540980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.541337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.541376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.541743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.541777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.542138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.542180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.542530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.542561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 
00:31:52.907 [2024-11-19 09:49:39.542920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.542949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.543288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.543320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.543694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.543725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.544084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.544114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.544503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.544535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 
00:31:52.907 [2024-11-19 09:49:39.544890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.544920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.545289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.545320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.545712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.545742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.546102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.546131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.546447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.546478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 
00:31:52.907 [2024-11-19 09:49:39.546856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.546888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.547234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.547266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.547637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.547669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.548033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.548063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.548426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.548458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 
00:31:52.907 [2024-11-19 09:49:39.548807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.548837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.549195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.549227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.549593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.549624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.549944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.549974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.550378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.550412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 
00:31:52.907 [2024-11-19 09:49:39.550761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.550792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.551146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.551201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.551553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.551583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.551944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.551974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.552351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.552383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 
00:31:52.907 [2024-11-19 09:49:39.552730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.552760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.553115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.553144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.553516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.553547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.907 [2024-11-19 09:49:39.553915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.907 [2024-11-19 09:49:39.553946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.907 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.554300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.554331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 
00:31:52.908 [2024-11-19 09:49:39.554698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.554727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.555117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.555147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.555540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.555571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.555935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.555965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.556312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.556345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 
00:31:52.908 [2024-11-19 09:49:39.556698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.556730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.557087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.557123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.557525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.557557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.557921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.557952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.558367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.558398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 
00:31:52.908 [2024-11-19 09:49:39.558816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.558847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.559198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.559230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.559596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.559625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.559981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.560011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.560385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.560417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 
00:31:52.908 [2024-11-19 09:49:39.560770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.560801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.561167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.561198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.561548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.561578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.561925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.561957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.562322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.562353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 
00:31:52.908 [2024-11-19 09:49:39.562729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.562759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.563105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.563136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.563500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.563531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.563886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.563917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.564268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.564300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 
00:31:52.908 [2024-11-19 09:49:39.564658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.564688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.565008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.565039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.565401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.565432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.565683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.565715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 00:31:52.908 [2024-11-19 09:49:39.566065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.908 [2024-11-19 09:49:39.566096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.908 qpair failed and we were unable to recover it. 
00:31:52.908 [2024-11-19 09:49:39.566453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.909 [2024-11-19 09:49:39.566486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.909 qpair failed and we were unable to recover it. 00:31:52.909 [2024-11-19 09:49:39.566840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.909 [2024-11-19 09:49:39.566871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.909 qpair failed and we were unable to recover it. 00:31:52.909 [2024-11-19 09:49:39.567228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.909 [2024-11-19 09:49:39.567260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.909 qpair failed and we were unable to recover it. 00:31:52.909 [2024-11-19 09:49:39.567612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.909 [2024-11-19 09:49:39.567643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.909 qpair failed and we were unable to recover it. 00:31:52.909 [2024-11-19 09:49:39.568016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.909 [2024-11-19 09:49:39.568045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.909 qpair failed and we were unable to recover it. 
00:31:52.909 [2024-11-19 09:49:39.568406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:52.909 [2024-11-19 09:49:39.568437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:52.909 qpair failed and we were unable to recover it.
[The three-line sequence above — connect() failing with errno = 111 (ECONNREFUSED), the resulting nvme_tcp qpair connection error for tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420, and the unrecoverable-qpair message — repeats with identical content from 09:49:39.568 through 09:49:39.611; the repeated occurrences are elided here.]
00:31:52.912 [2024-11-19 09:49:39.612197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.612230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.612617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.612648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.613002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.613033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.613395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.613428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.613782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.613815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 
00:31:52.912 [2024-11-19 09:49:39.614060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.614093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.614453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.614484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.614840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.614871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.615236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.615267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.615648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.615677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 
00:31:52.912 [2024-11-19 09:49:39.615930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.615960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.616323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.616355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.617543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.617594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.617949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.617983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.618336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.618369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 
00:31:52.912 [2024-11-19 09:49:39.618723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.618754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.619112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.619150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:52.912 [2024-11-19 09:49:39.619563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:52.912 [2024-11-19 09:49:39.619593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:52.912 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.619944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.619977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.620333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.620367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 
00:31:53.184 [2024-11-19 09:49:39.620744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.620776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.624191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.624255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.624656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.624692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.625096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.625128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.625526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.625560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 
00:31:53.184 [2024-11-19 09:49:39.625939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.625975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.626319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.626349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.626732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.626761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.627126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.627174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.184 [2024-11-19 09:49:39.627563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.627594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 
00:31:53.184 [2024-11-19 09:49:39.627986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.184 [2024-11-19 09:49:39.628017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.184 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.628389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.628422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.628672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.628701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.629091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.629123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.629392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.629424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 
00:31:53.185 [2024-11-19 09:49:39.629798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.629828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.630187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.630219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.630586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.630616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.630982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.631011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.631411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.631442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 
00:31:53.185 [2024-11-19 09:49:39.631837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.631872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.632156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.632206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.632587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.632620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.633034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.633076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.633436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.633473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 
00:31:53.185 [2024-11-19 09:49:39.633888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.633929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.637184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.637231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.637640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.637667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.638044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.638068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.638402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.638426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 
00:31:53.185 [2024-11-19 09:49:39.638800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.638822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.639203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.639232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.639596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.639620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.639993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.640016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.640321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.640344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 
00:31:53.185 [2024-11-19 09:49:39.640714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.640737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.641069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.641099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.641450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.641473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.641815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.641839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.642257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.642280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 
00:31:53.185 [2024-11-19 09:49:39.642636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.642659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.642995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.643016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.643339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.643360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.643599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.643620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.643993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.644015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 
00:31:53.185 [2024-11-19 09:49:39.644385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.644407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.644767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.644789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.185 [2024-11-19 09:49:39.645135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.185 [2024-11-19 09:49:39.645167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.185 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.645471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.645493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.645860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.645882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 
00:31:53.186 [2024-11-19 09:49:39.646258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.646283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.646617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.646638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.646853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.646874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.647215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.647237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.647583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.647606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 
00:31:53.186 [2024-11-19 09:49:39.647940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.647968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.648335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.648365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.648715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.648742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.649113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.649142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.649500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.649528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 
00:31:53.186 [2024-11-19 09:49:39.649889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.649919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.650263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.650293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.650658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.650687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.651054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.651083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 00:31:53.186 [2024-11-19 09:49:39.651480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.186 [2024-11-19 09:49:39.651510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.186 qpair failed and we were unable to recover it. 
00:31:53.186 [2024-11-19 09:49:39.651868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.651896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.652266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.652295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.652546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.652573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.652911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.652940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.653307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.653337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.653696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.653723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.654083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.654111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.654483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.654513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.654868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.654894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.655258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.655286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.655657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.655685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.655937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.655969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.656332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.656360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.656714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.656744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.657095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.657123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.657506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.657534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.657892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.657919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.658277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.658305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.658667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.658697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.658990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.186 [2024-11-19 09:49:39.659020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.186 qpair failed and we were unable to recover it.
00:31:53.186 [2024-11-19 09:49:39.659391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.659427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.659798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.659831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.660185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.660217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.660566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.660598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.660961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.660993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.661267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.661298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.661671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.661701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.661932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.661962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.662310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.662341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.662693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.662725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.663071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.663102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.663498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.663530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.663778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.663812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.664173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.664206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.664548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.664579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.664969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.665002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.665249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.665281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.665640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.665670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.666028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.666067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.666423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.666455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.666811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.666841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.667186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.667218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.667580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.667610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.667972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.668002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.668416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.668449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.668813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.668846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.669190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.669223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.669580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.669614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.670004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.670035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.670466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.670498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.670844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.670875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.671232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.671264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.671649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.671679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.672075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.672106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.672491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.672522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.672881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.672911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.673276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.673309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.187 [2024-11-19 09:49:39.673450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.187 [2024-11-19 09:49:39.673482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.187 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.673886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.673917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.674273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.674304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.674671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.674701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.675054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.675088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.675444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.675475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.675841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.675870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.676229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.676260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.676617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.676648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.676994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.677024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.677381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.677412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.677665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.677695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.678050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.678080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.678443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.678475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.678838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.678868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.679225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.679257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.679621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.679652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.679883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.679914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.680273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.680306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.680692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.680725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.681084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.681115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.681471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.681509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.681863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.681895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.682245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.682278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.682635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.682665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.683021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.683051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.683421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.683452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.683891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.683922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.684282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.684313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.684558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.684588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.684972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.685004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.685377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.685410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.685771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.685801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.686172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.686204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.686562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.686592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.686937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.686967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.687330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.687362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.687728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.188 [2024-11-19 09:49:39.687760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.188 qpair failed and we were unable to recover it.
00:31:53.188 [2024-11-19 09:49:39.688124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.688154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.688540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.688573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.688930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.688960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.689318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.689349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.689710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.689740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 
00:31:53.189 [2024-11-19 09:49:39.690098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.690127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.690505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.690536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.690892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.690924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.691283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.691316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.691676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.691706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 
00:31:53.189 [2024-11-19 09:49:39.692069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.692100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.692463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.692496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.692737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.692768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.693136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.693177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.693579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.693611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 
00:31:53.189 [2024-11-19 09:49:39.693950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.693981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.694333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.694365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.694736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.694768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.695134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.695187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.695566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.695597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 
00:31:53.189 [2024-11-19 09:49:39.695949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.695981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.696349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.696383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.696736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.696766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.697124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.697168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.697544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.697576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 
00:31:53.189 [2024-11-19 09:49:39.697939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.697971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.698331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.698364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.698729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.698759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.699112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.699143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.699525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.699556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 
00:31:53.189 [2024-11-19 09:49:39.699925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.699955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.189 [2024-11-19 09:49:39.700211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.189 [2024-11-19 09:49:39.700248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.189 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.700595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.700627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.700986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.701017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.701381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.701413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 
00:31:53.190 [2024-11-19 09:49:39.701808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.701838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.702199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.702230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.702616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.702648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.703018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.703048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.703410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.703441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 
00:31:53.190 [2024-11-19 09:49:39.703781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.703811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.704172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.704205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.704569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.704602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.704967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.704998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.705384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.705416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 
00:31:53.190 [2024-11-19 09:49:39.705763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.705792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.706155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.706195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.706420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.706449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.706813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.706843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.707280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.707314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 
00:31:53.190 [2024-11-19 09:49:39.707678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.707709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.708050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.708080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.708446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.708478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.708847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.708879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.709244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.709277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 
00:31:53.190 [2024-11-19 09:49:39.709633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.709663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.710019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.710049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.710417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.710451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.710808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.710838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.711201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.711233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 
00:31:53.190 [2024-11-19 09:49:39.711611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.711642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.711999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.712029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.712402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.712432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.712789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.712831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.713207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.713239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 
00:31:53.190 [2024-11-19 09:49:39.713620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.713650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.714017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.714047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.714285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.714318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.714699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.714731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.190 qpair failed and we were unable to recover it. 00:31:53.190 [2024-11-19 09:49:39.714973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.190 [2024-11-19 09:49:39.715008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 
00:31:53.191 [2024-11-19 09:49:39.715366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.715399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.715754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.715785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.716145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.716185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.716542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.716572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.716931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.716962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 
00:31:53.191 [2024-11-19 09:49:39.717312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.717343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.717572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.717602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.717859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.717889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.718248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.718278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.718634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.718666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 
00:31:53.191 [2024-11-19 09:49:39.719013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.719045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.719376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.719407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.719651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.719684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.720028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.720059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 00:31:53.191 [2024-11-19 09:49:39.720419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.720452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 
00:31:53.191 [2024-11-19 09:49:39.720812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.191 [2024-11-19 09:49:39.720842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.191 qpair failed and we were unable to recover it. 
00:31:53.194 [2024-11-19 09:49:39.764459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.764491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.764853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.764883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.765261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.765293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.765663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.765695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.766006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.766036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 
00:31:53.194 [2024-11-19 09:49:39.766395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.766426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.766787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.766818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.767189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.767220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.767575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.767607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.767970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.768000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 
00:31:53.194 [2024-11-19 09:49:39.768266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.768298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.768671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.768700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.769109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.769140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.769539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.769570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 00:31:53.194 [2024-11-19 09:49:39.769960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.194 [2024-11-19 09:49:39.769991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.194 qpair failed and we were unable to recover it. 
00:31:53.195 [2024-11-19 09:49:39.770339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.770370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.770616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.770646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.771007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.771038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.771288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.771320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.771697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.771729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 
00:31:53.195 [2024-11-19 09:49:39.772096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.772127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.772522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.772553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.772981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.773013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.773376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.773413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.773759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.773789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 
00:31:53.195 [2024-11-19 09:49:39.774154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.774199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.774468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.774499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.774853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.774882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.775255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.775287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.775643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.775675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 
00:31:53.195 [2024-11-19 09:49:39.776034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.776065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.776433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.776465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.776837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.776867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.777112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.777142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.777389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.777420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 
00:31:53.195 [2024-11-19 09:49:39.777788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.777819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.778181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.778212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.778572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.778604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.778976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.779006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.779382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.779416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 
00:31:53.195 [2024-11-19 09:49:39.779787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.779818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.780064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.780097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.780455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.780488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.780840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.780870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.781232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.781266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 
00:31:53.195 [2024-11-19 09:49:39.781625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.781656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.782015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.782045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.782403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.782437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.782806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.782835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.783091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.783120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 
00:31:53.195 [2024-11-19 09:49:39.783491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.783524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.783915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.195 [2024-11-19 09:49:39.783945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.195 qpair failed and we were unable to recover it. 00:31:53.195 [2024-11-19 09:49:39.784266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.784298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.784666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.784697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.784923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.784954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 
00:31:53.196 [2024-11-19 09:49:39.785307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.785338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.785776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.785805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.786049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.786078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.786435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.786467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.786829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.786860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 
00:31:53.196 [2024-11-19 09:49:39.787227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.787257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.787637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.787668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.788028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.788060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.788418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.788462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.788696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.788727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 
00:31:53.196 [2024-11-19 09:49:39.789131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.789172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.789534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.789565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.789806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.789836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.790090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.790119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.790494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.790527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 
00:31:53.196 [2024-11-19 09:49:39.790918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.790952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.791321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.791353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.791717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.791750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.792115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.792145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.792503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.792533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 
00:31:53.196 [2024-11-19 09:49:39.792900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.792931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.793307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.793340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.793727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.793758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.794128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.794167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.794536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.794568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 
00:31:53.196 [2024-11-19 09:49:39.794818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.794850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.795101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.795133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.795536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.795567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.795928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.795960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.796203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.796235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 
00:31:53.196 [2024-11-19 09:49:39.796479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.796508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.796864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.796894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.797266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.797299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.797548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.797577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 00:31:53.196 [2024-11-19 09:49:39.797827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.196 [2024-11-19 09:49:39.797859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.196 qpair failed and we were unable to recover it. 
00:31:53.197 [2024-11-19 09:49:39.798296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.798329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.798731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.798761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.799124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.799155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.799550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.799582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.799692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.799719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 
00:31:53.197 [2024-11-19 09:49:39.799967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.799996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.800294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.800325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.800699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.800729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.801091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.801121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.801505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.801537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 
00:31:53.197 [2024-11-19 09:49:39.801892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.801923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.802283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.802316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.802688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.802717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.802962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.803002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.803275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.803307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 
00:31:53.197 [2024-11-19 09:49:39.803663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.803694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.803946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.803979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.804324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.804355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.804713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.804743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.805109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.805139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 
00:31:53.197 [2024-11-19 09:49:39.805267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.805297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.805635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.805666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.806025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.806054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.806393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.806424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.806781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.806814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 
00:31:53.197 [2024-11-19 09:49:39.807190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.807222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.807575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.807605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.807965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.807995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.808374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.808405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.808653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.808686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 
00:31:53.197 [2024-11-19 09:49:39.809064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.809094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.809432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.809464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.809714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.809744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.810125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.810156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.810501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.810533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 
00:31:53.197 [2024-11-19 09:49:39.810892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.197 [2024-11-19 09:49:39.810923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.197 qpair failed and we were unable to recover it. 00:31:53.197 [2024-11-19 09:49:39.811150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.811190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.811562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.811593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.811956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.811985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.812333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.812365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 
00:31:53.198 [2024-11-19 09:49:39.812604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.812637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.812992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.813026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.813281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.813312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.813676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.813706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.813932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.813963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 
00:31:53.198 [2024-11-19 09:49:39.814323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.814353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.814712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.814742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.814989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.815019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.815352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.815383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.815738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.815769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 
00:31:53.198 [2024-11-19 09:49:39.816124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.816155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.816532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.816565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.816912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.816943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.817189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.817231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.817577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.817608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 
00:31:53.198 [2024-11-19 09:49:39.817856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.817887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.818309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.818342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.818711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.818743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.819107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.819139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.819535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.819569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 
00:31:53.198 [2024-11-19 09:49:39.819933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.819965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.820216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.820249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.820597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.820629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.820984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.821015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.821384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.821416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 
00:31:53.198 [2024-11-19 09:49:39.821784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.821815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.822215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.822248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.822617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.822648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.823002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.198 [2024-11-19 09:49:39.823033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.198 qpair failed and we were unable to recover it. 00:31:53.198 [2024-11-19 09:49:39.823475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.823509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 
00:31:53.199 [2024-11-19 09:49:39.823859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.823890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.824289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.824321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.824715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.824746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.825095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.825127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.825488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.825520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 
00:31:53.199 [2024-11-19 09:49:39.825910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.825942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.826301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.826335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.826696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.826727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.827095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.827126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.827503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.827534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 
00:31:53.199 [2024-11-19 09:49:39.827777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.827809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.828148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.828191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.828543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.828573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.828953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.828983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.829333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.829365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 
00:31:53.199 [2024-11-19 09:49:39.829602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.829631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.829997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.830028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.830383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.830415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.830768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.830797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 00:31:53.199 [2024-11-19 09:49:39.831168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.199 [2024-11-19 09:49:39.831198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.199 qpair failed and we were unable to recover it. 
00:31:53.199 [2024-11-19 09:49:39.831548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.831578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.832010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.832040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.832391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.832421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.832776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.832812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.833183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.833218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.833621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.833652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.834008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.834039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.834414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.834447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.834806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.834836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.835200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.835231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.835584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.835614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.835970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.836001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.836339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.836371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.199 [2024-11-19 09:49:39.836743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.199 [2024-11-19 09:49:39.836773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.199 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.837121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.837152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.837394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.837429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.837789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.837819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.838190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.838222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.838583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.838613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.838966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.838997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.839419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.839450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.839803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.839833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.840191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.840223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.840584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.840614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.841047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.841077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.841455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.841488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.841722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.841755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.842153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.842196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.842563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.842593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.842950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.842979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.843354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.843387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.843634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.843666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.844024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.844054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.844394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.844425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.844853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.844883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.845230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.845261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.845493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.845523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.845720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.845749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.846124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.846157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.846549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.846579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.846944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.846973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.847336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.847368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.847726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.847756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.848118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.848153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.848576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.848610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.848941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.848971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.849333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.849364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.849724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.849753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.850191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.850224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.850463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.850499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.200 [2024-11-19 09:49:39.850856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.200 [2024-11-19 09:49:39.850886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.200 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.851239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.851271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.851617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.851647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.851996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.852026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.852396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.852427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.852660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.852694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.853048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.853078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.853442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.853474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.853832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.853863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.854065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.854095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.854454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.854485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.854837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.854868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.855229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.855261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.855649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.855678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.856032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.856062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.856267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.856301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.856663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.856693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.857079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.857112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.857507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.857538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.857887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.857918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.858274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.858312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.858663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.858693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.858937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.858967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.859319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.859352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.859706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.859737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.860102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.860133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.860501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.860531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.860888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.860918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.861315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.861346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.861571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.861602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.861982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.862012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.862377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.862408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.862755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.862785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.863145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.863188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.863465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.863496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.863724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.863753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.863966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.863997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.201 [2024-11-19 09:49:39.864346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.201 [2024-11-19 09:49:39.864378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.201 qpair failed and we were unable to recover it.
00:31:53.202 [2024-11-19 09:49:39.864755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.202 [2024-11-19 09:49:39.864784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.202 qpair failed and we were unable to recover it.
00:31:53.202 [2024-11-19 09:49:39.865066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.202 [2024-11-19 09:49:39.865096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.202 qpair failed and we were unable to recover it.
00:31:53.202 [2024-11-19 09:49:39.865513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.202 [2024-11-19 09:49:39.865545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.202 qpair failed and we were unable to recover it.
00:31:53.202 [2024-11-19 09:49:39.865898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.202 [2024-11-19 09:49:39.865928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.202 qpair failed and we were unable to recover it.
00:31:53.202 [2024-11-19 09:49:39.866296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.202 [2024-11-19 09:49:39.866327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.202 qpair failed and we were unable to recover it.
00:31:53.202 [2024-11-19 09:49:39.866562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.202 [2024-11-19 09:49:39.866592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.202 qpair failed and we were unable to recover it.
00:31:53.202 [2024-11-19 09:49:39.866961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.202 [2024-11-19 09:49:39.866991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.202 qpair failed and we were unable to recover it.
00:31:53.202 [2024-11-19 09:49:39.867335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.867367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.867730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.867761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.868115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.868146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.868517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.868548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.868913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.868944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 
00:31:53.202 [2024-11-19 09:49:39.869316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.869347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.869699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.869729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.870086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.870115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.870513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.870545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.870764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.870794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 
00:31:53.202 [2024-11-19 09:49:39.871173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.871204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.871598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.871628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.871979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.872009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.872378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.872409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.872776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.872809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 
00:31:53.202 [2024-11-19 09:49:39.873173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.873210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.873551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.873584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.873939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.873969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.874335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.874366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.874714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.874745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 
00:31:53.202 [2024-11-19 09:49:39.875100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.875132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.875499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.875531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.875880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.875911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.876274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.876304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.876675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.876707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 
00:31:53.202 [2024-11-19 09:49:39.877063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.877095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.877320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.877351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.877718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.877748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.878098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.878129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 00:31:53.202 [2024-11-19 09:49:39.878381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.202 [2024-11-19 09:49:39.878415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.202 qpair failed and we were unable to recover it. 
00:31:53.202 [2024-11-19 09:49:39.878765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.878795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.879178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.879211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.879555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.879586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.879943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.879974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.880331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.880362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 
00:31:53.203 [2024-11-19 09:49:39.880719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.880750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.880993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.881023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.881386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.881416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.881774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.881804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.882194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.882226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 
00:31:53.203 [2024-11-19 09:49:39.882580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.882612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.882841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.882870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.883241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.883274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.883623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.883652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.884010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.884039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 
00:31:53.203 [2024-11-19 09:49:39.884401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.884432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.884786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.884815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.885174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.885205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.885557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.885587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.885960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.885991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 
00:31:53.203 [2024-11-19 09:49:39.886234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.886266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.886628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.886658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.886950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.886981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.887342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.887373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.887723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.887753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 
00:31:53.203 [2024-11-19 09:49:39.887982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.888016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.888406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.888438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.888808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.888837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.889202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.889233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.889586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.889616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 
00:31:53.203 [2024-11-19 09:49:39.889963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.889993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.890373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.203 [2024-11-19 09:49:39.890404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.203 qpair failed and we were unable to recover it. 00:31:53.203 [2024-11-19 09:49:39.890607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.890637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.890996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.891025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.891391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.891422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 
00:31:53.204 [2024-11-19 09:49:39.891776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.891805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.892176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.892208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.892567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.892600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.892963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.892992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.893424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.893458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 
00:31:53.204 [2024-11-19 09:49:39.893804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.893835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.894197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.894228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.894582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.894612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.894970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.895000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.895334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.895366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 
00:31:53.204 [2024-11-19 09:49:39.895735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.895767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.896137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.896176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.896536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.896565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.896918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.896948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.897308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.897340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 
00:31:53.204 [2024-11-19 09:49:39.897702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.897733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.898088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.898119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.898519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.898551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.898920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.898952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.899321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.899353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 
00:31:53.204 [2024-11-19 09:49:39.899703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.899733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.900093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.900125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.900413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.900444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.900791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.900822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.901074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.901106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 
00:31:53.204 [2024-11-19 09:49:39.901392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.901425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.901782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.901814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.902183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.902216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.902563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.902593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.902946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.902976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 
00:31:53.204 [2024-11-19 09:49:39.903211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.903251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.903606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.903636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.903994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.904023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.904403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.204 [2024-11-19 09:49:39.904434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.204 qpair failed and we were unable to recover it. 00:31:53.204 [2024-11-19 09:49:39.904796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.904825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 
00:31:53.205 [2024-11-19 09:49:39.905185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.905216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.905590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.905622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.905981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.906011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.906387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.906419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.906851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.906883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 
00:31:53.205 [2024-11-19 09:49:39.907234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.907268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.907627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.907659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.908033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.908064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.908431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.908462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.908825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.908856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 
00:31:53.205 [2024-11-19 09:49:39.909212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.909244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.909641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.909672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.910019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.910051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.910322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.910353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.910740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.910769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 
00:31:53.205 [2024-11-19 09:49:39.911124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.911154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.911523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.911553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.911950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.911980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.912344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.912376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.912607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.912636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 
00:31:53.205 [2024-11-19 09:49:39.912887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.912916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.913361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.913393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.913743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.913774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.914132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.914180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.914563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.914595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 
00:31:53.205 [2024-11-19 09:49:39.914939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.914970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.915333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.915364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.915703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.915735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.916087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.916118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.916475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.916508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 
00:31:53.205 [2024-11-19 09:49:39.916885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.916916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.917273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.917304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.917660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.917690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.205 [2024-11-19 09:49:39.918060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.205 [2024-11-19 09:49:39.918091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.205 qpair failed and we were unable to recover it. 00:31:53.480 [2024-11-19 09:49:39.918450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.918484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 
00:31:53.480 [2024-11-19 09:49:39.918847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.918882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 00:31:53.480 [2024-11-19 09:49:39.919230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.919263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 00:31:53.480 [2024-11-19 09:49:39.919497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.919526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 00:31:53.480 [2024-11-19 09:49:39.919874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.919905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 00:31:53.480 [2024-11-19 09:49:39.920272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.920304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 
00:31:53.480 [2024-11-19 09:49:39.920658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.920691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 00:31:53.480 [2024-11-19 09:49:39.921048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.921078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 00:31:53.480 [2024-11-19 09:49:39.921441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.921474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 00:31:53.480 [2024-11-19 09:49:39.921836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.921867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 00:31:53.480 [2024-11-19 09:49:39.922224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.480 [2024-11-19 09:49:39.922256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.480 qpair failed and we were unable to recover it. 
00:31:53.480 [2024-11-19 09:49:39.922619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.922651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.923009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.923040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.923404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.923436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.923790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.923823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.924177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.924211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 
00:31:53.481 [2024-11-19 09:49:39.924554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.924585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.924948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.924980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.925331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.925364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.925728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.925759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.926114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.926146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 
00:31:53.481 [2024-11-19 09:49:39.926501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.926533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.926879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.926909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.927242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.927273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.927624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.927657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.928004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.928034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 
00:31:53.481 [2024-11-19 09:49:39.928371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.928403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.928759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.928788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.929146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.929189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.929619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.929651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.930020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.930051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 
00:31:53.481 [2024-11-19 09:49:39.930413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.930447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.930678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.930711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.931055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.931088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.931453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.931486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.931846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.931876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 
00:31:53.481 [2024-11-19 09:49:39.932242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.932274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.932673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.932704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.933096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.933127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.933524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.933556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.933767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.933799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 
00:31:53.481 [2024-11-19 09:49:39.934170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.934209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.934552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.934585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.934944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.934974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.935300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.935332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.935675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.935704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 
00:31:53.481 [2024-11-19 09:49:39.936061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.936090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.936453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.936484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.481 [2024-11-19 09:49:39.936841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.481 [2024-11-19 09:49:39.936871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.481 qpair failed and we were unable to recover it. 00:31:53.482 [2024-11-19 09:49:39.937239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.482 [2024-11-19 09:49:39.937270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.482 qpair failed and we were unable to recover it. 00:31:53.482 [2024-11-19 09:49:39.937676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.482 [2024-11-19 09:49:39.937706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.482 qpair failed and we were unable to recover it. 
00:31:53.482 [2024-11-19 09:49:39.938069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.482 [2024-11-19 09:49:39.938099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.482 qpair failed and we were unable to recover it. 00:31:53.482 [2024-11-19 09:49:39.938460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.482 [2024-11-19 09:49:39.938492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.482 qpair failed and we were unable to recover it. 00:31:53.482 [2024-11-19 09:49:39.938841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.482 [2024-11-19 09:49:39.938871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.482 qpair failed and we were unable to recover it. 00:31:53.482 [2024-11-19 09:49:39.939082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.482 [2024-11-19 09:49:39.939112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.482 qpair failed and we were unable to recover it. 00:31:53.482 [2024-11-19 09:49:39.939463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.482 [2024-11-19 09:49:39.939497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.482 qpair failed and we were unable to recover it. 
00:31:53.482 [2024-11-19 09:49:39.939852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.939882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.940125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.940154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.940526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.940557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.940918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.940947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.941318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.941351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.941712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.941742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.942096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.942126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.942507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.942540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.942900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.942931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.943281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.943314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.943573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.943606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.943951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.943980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.944338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.944370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.944691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.944721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.944965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.944996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.945336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.945369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.945709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.945739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.946089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.946119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.946479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.946511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.946863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.946893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.947251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.947281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.947638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.947669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.947893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.947923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.948285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.948317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.948677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.948708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.949064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.949101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.949516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.949550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.949892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.949924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.950282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.950314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.950678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.482 [2024-11-19 09:49:39.950708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.482 qpair failed and we were unable to recover it.
00:31:53.482 [2024-11-19 09:49:39.951069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.951098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.951461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.951494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.951872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.951903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.952257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.952291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.952630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.952660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.953027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.953058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.953412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.953445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.953841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.953872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.954237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.954268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.954633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.954665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.954963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.954993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.955348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.955382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.955600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.955629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.955966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.955998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.956338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.956369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.956706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.956735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.957093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.957124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.957407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.957441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.957768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.957798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.958156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.958201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.958542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.958573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.958934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.958964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.959316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.959350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.959702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.959732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.960090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.960120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.960494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.960526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.960876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.960907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.961268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.961300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.961652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.961681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.961923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.961956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.962315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.962346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.962703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.962733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.963093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.963124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.963494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.963527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.963881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.963911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.964277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.964315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.964531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.964560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.483 [2024-11-19 09:49:39.964802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.483 [2024-11-19 09:49:39.964832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.483 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.965191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.965222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.965572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.965604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.965969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.965999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.966336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.966367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.966732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.966763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.967182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.967214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.967563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.967593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.967956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.967986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.968335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.968367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.968733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.968764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.968944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.968977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.969336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.969369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.969722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.969752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.970101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.970131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.970469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.970500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.970852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.970881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.971133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.971179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.971555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.971586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.971949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.971981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.972334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.972366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.972728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.972758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.973127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.973157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.973524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.973554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.973903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.973932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.974296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.974328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.974689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.974719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.975079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.484 [2024-11-19 09:49:39.975111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.484 qpair failed and we were unable to recover it.
00:31:53.484 [2024-11-19 09:49:39.975473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.975505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 00:31:53.484 [2024-11-19 09:49:39.975851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.975883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 00:31:53.484 [2024-11-19 09:49:39.976243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.976276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 00:31:53.484 [2024-11-19 09:49:39.976658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.976688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 00:31:53.484 [2024-11-19 09:49:39.977119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.977149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 
00:31:53.484 [2024-11-19 09:49:39.977513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.977544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 00:31:53.484 [2024-11-19 09:49:39.977905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.977937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 00:31:53.484 [2024-11-19 09:49:39.978304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.978336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 00:31:53.484 [2024-11-19 09:49:39.978692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.978722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 00:31:53.484 [2024-11-19 09:49:39.979076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.484 [2024-11-19 09:49:39.979106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.484 qpair failed and we were unable to recover it. 
00:31:53.484 [2024-11-19 09:49:39.979466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.979505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.979864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.979896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.980258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.980290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.980644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.980674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.981032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.981062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 
00:31:53.485 [2024-11-19 09:49:39.981425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.981456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.981811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.981840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.982198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.982230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.982424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.982453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.982767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.982798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 
00:31:53.485 [2024-11-19 09:49:39.983176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.983209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.983557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.983588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.983946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.983975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.984336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.984369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.984767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.984799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 
00:31:53.485 [2024-11-19 09:49:39.985157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.985199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.985562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.985592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.985949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.985979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.986198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.986231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.986621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.986652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 
00:31:53.485 [2024-11-19 09:49:39.987058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.987091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 527918 Killed "${NVMF_APP[@]}" "$@" 00:31:53.485 [2024-11-19 09:49:39.987354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.987386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.987732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.987764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.988107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.988139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 
00:31:53.485 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:31:53.485 [2024-11-19 09:49:39.988523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.988555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:53.485 [2024-11-19 09:49:39.988943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.988981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:53.485 [2024-11-19 09:49:39.989329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.989361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:53.485 qpair failed and we were unable to recover it. 
00:31:53.485 [2024-11-19 09:49:39.989730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.989760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.990120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.990151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.485 [2024-11-19 09:49:39.990504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.485 [2024-11-19 09:49:39.990537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.485 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.990935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.990966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.991194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.991224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 
00:31:53.486 [2024-11-19 09:49:39.991601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.991632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.991999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.992029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.992452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.992484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.992832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.992862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.993224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.993256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 
00:31:53.486 [2024-11-19 09:49:39.993631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.993663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.994022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.994056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.994427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.994458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.994886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.994917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.995053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.995084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 
00:31:53.486 [2024-11-19 09:49:39.995351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.995383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.995737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.995769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.996136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.996178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.996542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.996572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.996953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.996985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 
00:31:53.486 [2024-11-19 09:49:39.997227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.997259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=528835 00:31:53.486 [2024-11-19 09:49:39.997589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.997620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.997854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 528835 00:31:53.486 [2024-11-19 09:49:39.997884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.998126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.998173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 
00:31:53.486 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:53.486 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 528835 ']' 00:31:53.486 [2024-11-19 09:49:39.998435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.998468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.486 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:53.486 [2024-11-19 09:49:39.998820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.998851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:53.486 [2024-11-19 09:49:39.999204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.999238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:53.486 [2024-11-19 09:49:39.999482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.999514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 09:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:39.999870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:39.999902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:40.000228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:40.000263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 
00:31:53.486 [2024-11-19 09:49:40.000627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:40.000661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:40.000820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:40.000852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:40.001210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:40.001244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:40.001653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:40.001686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:40.001928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:40.001964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 
00:31:53.486 [2024-11-19 09:49:40.002259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.486 [2024-11-19 09:49:40.002292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.486 qpair failed and we were unable to recover it. 00:31:53.486 [2024-11-19 09:49:40.002547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.487 [2024-11-19 09:49:40.002580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.487 qpair failed and we were unable to recover it. 00:31:53.487 [2024-11-19 09:49:40.002821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.487 [2024-11-19 09:49:40.002854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.487 qpair failed and we were unable to recover it. 00:31:53.487 [2024-11-19 09:49:40.003195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.487 [2024-11-19 09:49:40.003229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.487 qpair failed and we were unable to recover it. 00:31:53.487 [2024-11-19 09:49:40.007196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.487 [2024-11-19 09:49:40.007277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.487 qpair failed and we were unable to recover it. 
00:31:53.487 [2024-11-19 09:49:40.007653 .. 09:49:40.025922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.487 nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.487 qpair failed and we were unable to recover it. [the three messages above repeated for 45 consecutive reconnect attempts; only the timestamps differ]
00:31:53.488 [2024-11-19 09:49:40.026288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.488 [2024-11-19 09:49:40.026337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.488 qpair failed and we were unable to recover it. 00:31:53.488 [32 outstanding I/Os (16 reads, 16 writes) completed with error (sct=0, sc=8), each logged as "starting I/O failed"] 00:31:53.488 [2024-11-19 09:49:40.027000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:53.488 [2024-11-19 09:49:40.027473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.488 [2024-11-19 09:49:40.027574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.488 qpair failed and we were unable to recover it.
00:31:53.488 [2024-11-19 09:49:40.027858 .. 09:49:40.047234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.488 nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.488 qpair failed and we were unable to recover it. [the three messages above repeated for 60 consecutive reconnect attempts; only the timestamps differ]
00:31:53.490 [2024-11-19 09:49:40.047628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.047660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.047916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.047947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.048308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.048341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.048721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.048752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.049133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.049185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 
00:31:53.490 [2024-11-19 09:49:40.049423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.049456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.049675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.049704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.049967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.049998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.050387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.050422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.050792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.050827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 
00:31:53.490 [2024-11-19 09:49:40.051067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.051097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.051466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.051498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.051736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.051768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.052123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.052155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.052503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.052536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 
00:31:53.490 [2024-11-19 09:49:40.052774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.052805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.053190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.053224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.053591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.053621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.053857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.053887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.490 [2024-11-19 09:49:40.054240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.054275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 
00:31:53.490 [2024-11-19 09:49:40.054639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.490 [2024-11-19 09:49:40.054669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.490 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.054932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.054961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.055366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.055400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.055753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.055789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.056034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.056064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.056308] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:31:53.491 [2024-11-19 09:49:40.056371] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.491 [2024-11-19 09:49:40.056433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.056466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.056795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.056825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.057208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.057239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.057602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.057635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.057910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.057941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 
00:31:53.491 [2024-11-19 09:49:40.058351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.058385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.058775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.058808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.059192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.059224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.059483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.059513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.059886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.059917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 
00:31:53.491 [2024-11-19 09:49:40.060282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.060315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.060708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.060740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.061134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.061181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.061580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.061613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.061878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.061911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 
00:31:53.491 [2024-11-19 09:49:40.062215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.062250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.062509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.062543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.062853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.062884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.063293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.063327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.063600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.063639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 
00:31:53.491 [2024-11-19 09:49:40.064012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.064041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.064356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.064390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.064750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.064780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.065227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.065258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.065640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.065670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 
00:31:53.491 [2024-11-19 09:49:40.065995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.066024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.066484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.066514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.066742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.066770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.066975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.067012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.067382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.067414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 
00:31:53.491 [2024-11-19 09:49:40.067799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.067829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.068217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.068250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.068628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.491 [2024-11-19 09:49:40.068658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.491 qpair failed and we were unable to recover it. 00:31:53.491 [2024-11-19 09:49:40.069015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.069050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.069465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.069496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 
00:31:53.492 [2024-11-19 09:49:40.069865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.069894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.070279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.070312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.070656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.070685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.071018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.071050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.071355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.071385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 
00:31:53.492 [2024-11-19 09:49:40.071719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.071749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.071972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.072004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.072369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.072400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.072753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.072783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.073138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.073176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 
00:31:53.492 [2024-11-19 09:49:40.073564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.073595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.073971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.074001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.074361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.074391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.074764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.074794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.075118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.075148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 
00:31:53.492 [2024-11-19 09:49:40.075528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.075557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.075927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.075957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.076210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.076240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.076614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.076644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.077004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.077033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 
00:31:53.492 [2024-11-19 09:49:40.077413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.077451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.077809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.077837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.078180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.078211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.078342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.078375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.078752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.078786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 
00:31:53.492 [2024-11-19 09:49:40.079183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.079214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.079582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.079611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.079994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.080026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.080385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.080417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.080679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.080707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 
00:31:53.492 [2024-11-19 09:49:40.080993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.081026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.081374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.081406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.081776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.081804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.082202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.082234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.082491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.082522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 
00:31:53.492 [2024-11-19 09:49:40.082848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.492 [2024-11-19 09:49:40.082878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.492 qpair failed and we were unable to recover it. 00:31:53.492 [2024-11-19 09:49:40.083251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.083282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.083523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.083554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.083903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.083935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.084217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.084249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 
00:31:53.493 [2024-11-19 09:49:40.084633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.084664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.085037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.085069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.085428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.085459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.085741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.085769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.086092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.086123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 
00:31:53.493 [2024-11-19 09:49:40.086373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.086408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.086867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.086896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.087132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.087169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.087478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.087509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.087868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.087896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 
00:31:53.493 [2024-11-19 09:49:40.088247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.088279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.088510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.088538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.088904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.088935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.089292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.089323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.089719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.089750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 
00:31:53.493 [2024-11-19 09:49:40.090141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.090180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.090407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.090436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.090748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.090778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.091134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.091174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.091527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.091555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 
00:31:53.493 [2024-11-19 09:49:40.091919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.091948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.092192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.092230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.092619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.092649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.092907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.092935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.093303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.093334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 
00:31:53.493 [2024-11-19 09:49:40.093709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.093739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.094081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.094117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.094472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.094503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.094643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.094675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.095009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.095040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 
00:31:53.493 [2024-11-19 09:49:40.095430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.095461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.095834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.095862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.096195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.096231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.096516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.096549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 00:31:53.493 [2024-11-19 09:49:40.096896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.493 [2024-11-19 09:49:40.096925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.493 qpair failed and we were unable to recover it. 
00:31:53.494 [2024-11-19 09:49:40.097253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.097286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.097551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.097581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.097899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.097937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.098234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.098265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.098690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.098721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 
00:31:53.494 [2024-11-19 09:49:40.099091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.099121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.099559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.099590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.099983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.100014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.100403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.100435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.100796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.100827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 
00:31:53.494 [2024-11-19 09:49:40.101196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.101228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.105192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.105264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.105573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.105616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.106044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.106074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.106501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.106533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 
00:31:53.494 [2024-11-19 09:49:40.106896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.106929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.107307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.107339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.107693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.107729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.108027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.108062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.108408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.108447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 
00:31:53.494 [2024-11-19 09:49:40.108704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.108736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.109101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.109131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.109293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.109323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.109675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.109707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.109980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.110011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 
00:31:53.494 [2024-11-19 09:49:40.110370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.110404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.110764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.110794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.111082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.111112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.111490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.111523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.111895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.111925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 
00:31:53.494 [2024-11-19 09:49:40.112294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.112330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.112724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.112758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.113116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.113147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.113575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.494 [2024-11-19 09:49:40.113603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.494 qpair failed and we were unable to recover it. 00:31:53.494 [2024-11-19 09:49:40.113989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.114011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 
00:31:53.495 [2024-11-19 09:49:40.114304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.114328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 00:31:53.495 [2024-11-19 09:49:40.114627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.114651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 00:31:53.495 [2024-11-19 09:49:40.115027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.115052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 00:31:53.495 [2024-11-19 09:49:40.115460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.115484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 00:31:53.495 [2024-11-19 09:49:40.115854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.115878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 
00:31:53.495 [2024-11-19 09:49:40.116245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.116275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 00:31:53.495 [2024-11-19 09:49:40.116621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.116644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 00:31:53.495 [2024-11-19 09:49:40.119183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.119243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 00:31:53.495 [2024-11-19 09:49:40.119522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.119558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 00:31:53.495 [2024-11-19 09:49:40.119964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.495 [2024-11-19 09:49:40.119988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.495 qpair failed and we were unable to recover it. 
00:31:53.495 [2024-11-19 09:49:40.120345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.120370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.120718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.120742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.120982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.121006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.121341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.121368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.121604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.121627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.121926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.121951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.122289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.122314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.122641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.122668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.123001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.123027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.123215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.123246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.123554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.123576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.123919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.123941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.124317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.124340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.124718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.124740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.125096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.125116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.125435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.125451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.125659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.125675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.126033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.126052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.126368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.126385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.126626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.126647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.126971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.126990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.127256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.127277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.128190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.128221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.128594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.128612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.128796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.128813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.129218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.129241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.129580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.495 [2024-11-19 09:49:40.129604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.495 qpair failed and we were unable to recover it.
00:31:53.495 [2024-11-19 09:49:40.129935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.129961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.130293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.130310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.130565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.130582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.130982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.131000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.132177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.132216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.132542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.132560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.132932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.132951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.133204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.133221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.133557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.133576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.133944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.133966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.134207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.134232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.134494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.134513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.134857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.134874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.135181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.135201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.135571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.135589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.136190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.136219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.136565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.136581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.136827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.136838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.137209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.137226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.137539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.137557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.140175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.140215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.140577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.140591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.140945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.140967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.141330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.141353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.141739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.141755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.142080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.142096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.142422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.142436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.142652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.142669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.143028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.143046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.143359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.143374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.143742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.143764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.144109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.144126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.144389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.144403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.144771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.144787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.144912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:53.496 [2024-11-19 09:49:40.145191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.145208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.145529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.145547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.145888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.145912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.146259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.496 [2024-11-19 09:49:40.146273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.496 qpair failed and we were unable to recover it.
00:31:53.496 [2024-11-19 09:49:40.146639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.146657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.146991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.147005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.147241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.147257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.147531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.147549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.147767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.147785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.148166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.148182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.148551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.148568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.148944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.148957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.149303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.149324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.149689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.149708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.150050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.150063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.150354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.150368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.150711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.150727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.151083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.151144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.151315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.151329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.151693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.151709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.155178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.155220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.155468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.155482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.155852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.155873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.156242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.156262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.156638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.156651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.156968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.156985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.157295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.157312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.157615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.157628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.157946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.157968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.158349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.158363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.158715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.158737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.158939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.158949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.159304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.159317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.159643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.497 [2024-11-19 09:49:40.159654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.497 qpair failed and we were unable to recover it.
00:31:53.497 [2024-11-19 09:49:40.159962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.497 [2024-11-19 09:49:40.159973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.497 qpair failed and we were unable to recover it. 00:31:53.497 [2024-11-19 09:49:40.160273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.497 [2024-11-19 09:49:40.160286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.497 qpair failed and we were unable to recover it. 00:31:53.497 [2024-11-19 09:49:40.160616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.497 [2024-11-19 09:49:40.160628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.497 qpair failed and we were unable to recover it. 00:31:53.497 [2024-11-19 09:49:40.160973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.497 [2024-11-19 09:49:40.160985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.497 qpair failed and we were unable to recover it. 00:31:53.497 [2024-11-19 09:49:40.161215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.497 [2024-11-19 09:49:40.161226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.497 qpair failed and we were unable to recover it. 
00:31:53.497 [2024-11-19 09:49:40.161559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.497 [2024-11-19 09:49:40.161570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.497 qpair failed and we were unable to recover it. 00:31:53.497 [2024-11-19 09:49:40.161907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.497 [2024-11-19 09:49:40.161918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.497 qpair failed and we were unable to recover it. 00:31:53.497 [2024-11-19 09:49:40.162238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.497 [2024-11-19 09:49:40.162250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.497 qpair failed and we were unable to recover it. 00:31:53.497 [2024-11-19 09:49:40.162578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.497 [2024-11-19 09:49:40.162589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.497 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.162912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.162939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 
00:31:53.498 [2024-11-19 09:49:40.163278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.163289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.163591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.163602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.163903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.163914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.164133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.164164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.164474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.164484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 
00:31:53.498 [2024-11-19 09:49:40.164756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.164767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.165179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.165192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.165413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.165424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.165775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.165786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.166128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.166139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 
00:31:53.498 [2024-11-19 09:49:40.166472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.166484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.166671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.166682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.167093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.167104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.167470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.167483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.167788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.167798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 
00:31:53.498 [2024-11-19 09:49:40.168107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.168118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.168314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.168325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.168703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.168715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.168904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.168916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.169270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.169281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 
00:31:53.498 [2024-11-19 09:49:40.169724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.169734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.170051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.170062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.170370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.170383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.170626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.170637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.171000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.171011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 
00:31:53.498 [2024-11-19 09:49:40.171235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.171246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.171507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.171521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.171889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.171900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.172265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.172277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.172590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.172600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 
00:31:53.498 [2024-11-19 09:49:40.172933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.172945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.173175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.173187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.173560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.173571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.173907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.173918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.174104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.174116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 
00:31:53.498 [2024-11-19 09:49:40.174452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.174463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.498 [2024-11-19 09:49:40.174768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.498 [2024-11-19 09:49:40.174779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.498 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.174963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.174974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.175173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.175184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.175577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.175589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 
00:31:53.499 [2024-11-19 09:49:40.175787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.175799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.176153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.176172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.176482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.176493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.176706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.176716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.176949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.176960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 
00:31:53.499 [2024-11-19 09:49:40.177344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.177356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.177699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.177709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.178073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.178085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.178288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.178299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.178643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.178654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 
00:31:53.499 [2024-11-19 09:49:40.178963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.178974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.179315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.179326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.179664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.179675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.180008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.180020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.180346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.180358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 
00:31:53.499 [2024-11-19 09:49:40.180662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.180673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.180866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.180877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.181208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.181219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.181570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.181581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.181652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.181664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 
00:31:53.499 [2024-11-19 09:49:40.181983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.181996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.182312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.182324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.182637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.182647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.182964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.182975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.183335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.183346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 
00:31:53.499 [2024-11-19 09:49:40.183554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.183564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.183958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.183968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.184301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.184315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.184538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.184551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.184840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.184853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 
00:31:53.499 [2024-11-19 09:49:40.185163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.499 [2024-11-19 09:49:40.185176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.499 qpair failed and we were unable to recover it. 00:31:53.499 [2024-11-19 09:49:40.185541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.500 [2024-11-19 09:49:40.185554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.500 qpair failed and we were unable to recover it. 00:31:53.500 [2024-11-19 09:49:40.185782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.500 [2024-11-19 09:49:40.185798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.500 qpair failed and we were unable to recover it. 00:31:53.500 [2024-11-19 09:49:40.186154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.500 [2024-11-19 09:49:40.186171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.500 qpair failed and we were unable to recover it. 00:31:53.500 [2024-11-19 09:49:40.186530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.500 [2024-11-19 09:49:40.186543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.500 qpair failed and we were unable to recover it. 
00:31:53.500 [2024-11-19 09:49:40.186850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.186864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.187199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.187213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.187423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.187436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.187752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.187765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.188001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.188014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.188261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.188274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.188462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.188476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.188833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.188847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.189005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.189019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.189331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.189346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.190096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.190134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.190478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.190498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.190754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.190768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.191112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.191127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.191403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.191417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.191835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.191848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.192141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.192154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.192541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.192554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.192867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.192880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.193228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.193242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.193481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.193494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.193722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.193738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.193966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.193978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.194369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.194383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.194609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.194621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.194850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.194863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.195202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.195216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.195527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.195540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.195870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.195882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.196219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.196234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.196431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.196449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.196804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.196821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.197122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.197139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.197525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.197543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.500 [2024-11-19 09:49:40.197871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.500 [2024-11-19 09:49:40.197888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.500 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.198239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.198260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.198639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.198657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.199009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.199026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.199387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.199406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.199611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.199629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.199943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.199960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.200299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.200318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.200653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.200670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.200979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.200996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.201316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.201334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.201652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:53.501 [2024-11-19 09:49:40.201692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:53.501 [2024-11-19 09:49:40.201701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:53.501 [2024-11-19 09:49:40.201708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:53.501 [2024-11-19 09:49:40.201706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.201723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:53.501 [2024-11-19 09:49:40.201731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.201997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.202014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.202331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.202349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.202712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.202730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.202932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.202961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.203187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.203205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.203429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.203454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.203767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.203784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.203776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:31:53.501 [2024-11-19 09:49:40.204013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:31:53.501 [2024-11-19 09:49:40.204141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.204169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.204200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:31:53.501 [2024-11-19 09:49:40.204247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:31:53.501 [2024-11-19 09:49:40.204476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.204494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.204851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.204869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.205199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.205218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.205564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.205582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.205920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.205937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.206284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.206302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.206638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.206655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.206963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.206980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.207313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.207331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.207685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.207703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.208056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.208073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.208280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.208299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.208662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.208684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.209023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.209056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.209386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.209409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.501 [2024-11-19 09:49:40.209766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.501 [2024-11-19 09:49:40.209789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.501 qpair failed and we were unable to recover it.
00:31:53.502 [2024-11-19 09:49:40.210049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.502 [2024-11-19 09:49:40.210070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.502 qpair failed and we were unable to recover it.
00:31:53.502 [2024-11-19 09:49:40.210294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.502 [2024-11-19 09:49:40.210316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.502 qpair failed and we were unable to recover it.
00:31:53.502 [2024-11-19 09:49:40.210656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.502 [2024-11-19 09:49:40.210677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.502 qpair failed and we were unable to recover it.
00:31:53.502 [2024-11-19 09:49:40.210973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.502 [2024-11-19 09:49:40.210994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.502 qpair failed and we were unable to recover it.
00:31:53.502 [2024-11-19 09:49:40.211341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.502 [2024-11-19 09:49:40.211363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.502 qpair failed and we were unable to recover it.
00:31:53.502 [2024-11-19 09:49:40.211717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.502 [2024-11-19 09:49:40.211739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.502 qpair failed and we were unable to recover it.
00:31:53.775 [2024-11-19 09:49:40.212085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.212110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.212377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.212402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.212762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.212783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.213032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.213054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.213403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.213428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.213791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.213813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.214166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.214190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.214516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.214538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.214854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.214882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.215184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.215208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.215574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.215597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.215752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.215773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.215998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.216022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.216383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.216406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.216752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.216775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.217140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.217171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.217416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.217438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.217657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.217680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.217886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.217910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.218143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.218174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.218496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.218519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.218899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.218922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.219302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.219328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.219613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.219635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.219856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.219891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.220265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.220295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.220563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.220592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.220974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.221003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.221360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.221394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.221746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.221777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.222143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.222182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.222506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.222535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.222663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.222693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.223036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.223065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.223419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.223452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.223703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.223733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.224084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.224115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.776 [2024-11-19 09:49:40.224501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.776 [2024-11-19 09:49:40.224532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.776 qpair failed and we were unable to recover it.
00:31:53.777 [2024-11-19 09:49:40.224898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.224936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.225272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.225303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.225585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.225624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.225974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.226013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.226290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.226326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 
00:31:53.777 [2024-11-19 09:49:40.226694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.226732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.227111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.227140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.227438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.227468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.227704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.227738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.227979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.228009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 
00:31:53.777 [2024-11-19 09:49:40.228378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.228412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.228753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.228785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.229009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.229037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.229354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.229386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.229608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.229637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 
00:31:53.777 [2024-11-19 09:49:40.229883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.229912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.230180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.230212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.230571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.230600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.230963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.230993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.231361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.231393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 
00:31:53.777 [2024-11-19 09:49:40.231777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.231806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.232182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.232213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.232586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.232617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.232964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.232996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.233395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.233426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 
00:31:53.777 [2024-11-19 09:49:40.233763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.233794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.234204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.234238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.234592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.234623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.234982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.235012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.235384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.235415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 
00:31:53.777 [2024-11-19 09:49:40.235738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.235768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.236017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.236047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.236395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.236427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.236646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.236676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.237039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.237068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 
00:31:53.777 [2024-11-19 09:49:40.237303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.777 [2024-11-19 09:49:40.237332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.777 qpair failed and we were unable to recover it. 00:31:53.777 [2024-11-19 09:49:40.237619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.237652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.237874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.237903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.238268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.238308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.238681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.238713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 
00:31:53.778 [2024-11-19 09:49:40.238986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.239014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.239358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.239390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.239598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.239629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.239983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.240013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.240348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.240381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 
00:31:53.778 [2024-11-19 09:49:40.240777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.240807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.241173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.241203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.241592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.241622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.241988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.242018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.242389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.242421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 
00:31:53.778 [2024-11-19 09:49:40.242659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.242689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.242943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.242972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.243336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.243369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.243743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.243772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.244045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.244073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 
00:31:53.778 [2024-11-19 09:49:40.244324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.244357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.244599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.244633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.245021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.245050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.245418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.245450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.245675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.245704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 
00:31:53.778 [2024-11-19 09:49:40.246063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.246103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.246454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.246493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.246856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.246887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.247238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.247267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.247617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.247646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 
00:31:53.778 [2024-11-19 09:49:40.248001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.248030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.248259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.248291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.248678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.248707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.249054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.249083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.249342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.249372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 
00:31:53.778 [2024-11-19 09:49:40.249726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.249755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.250090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.250118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.250362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.250393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.250758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.778 [2024-11-19 09:49:40.250787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.778 qpair failed and we were unable to recover it. 00:31:53.778 [2024-11-19 09:49:40.251149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.251211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 
00:31:53.779 [2024-11-19 09:49:40.251609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.251640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.251903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.251932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.252216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.252248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.252644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.252672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.253049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.253086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 
00:31:53.779 [2024-11-19 09:49:40.253450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.253482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.253843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.253876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.254090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.254118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.254545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.254579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.254941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.254977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 
00:31:53.779 [2024-11-19 09:49:40.255360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.255390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.255763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.255792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.256038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.256067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.256398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.256429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.256796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.256826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 
00:31:53.779 [2024-11-19 09:49:40.257040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.257069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.257506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.257538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.257907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.257936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.258300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.258336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.258577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.258608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 
00:31:53.779 [2024-11-19 09:49:40.258947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.258976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.259322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.259352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.259715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.259747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.260096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.260130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.260485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.260519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 
00:31:53.779 [2024-11-19 09:49:40.260872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.260903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.261012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.261040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.261410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.261441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.261756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.261786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.262041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.262070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 
00:31:53.779 [2024-11-19 09:49:40.262328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.262358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.262720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.262758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.263089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.263124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.263348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.263379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.263717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.263748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 
00:31:53.779 [2024-11-19 09:49:40.264116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.264146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.264395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.779 [2024-11-19 09:49:40.264423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.779 qpair failed and we were unable to recover it. 00:31:53.779 [2024-11-19 09:49:40.264761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.264792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.265172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.265203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.265578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.265607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 
00:31:53.780 [2024-11-19 09:49:40.265960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.265989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.266327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.266357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.266719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.266748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.267091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.267123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.267374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.267404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 
00:31:53.780 [2024-11-19 09:49:40.267781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.267813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.268197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.268229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.268450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.268479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.268730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.268759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.268991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.269019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 
00:31:53.780 [2024-11-19 09:49:40.269369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.269402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.269757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.269792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.270139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.270179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.270544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.270574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.270777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.270806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 
00:31:53.780 [2024-11-19 09:49:40.271174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.271204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.271546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.271574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.271950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.271980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.272323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.272353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.272686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.272717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 
00:31:53.780 [2024-11-19 09:49:40.272926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.272954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.273195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.273227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.273604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.273633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.273801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.273829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.274193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.274225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 
00:31:53.780 [2024-11-19 09:49:40.274594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.274623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.274980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.275009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.275238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.275268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.275611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.780 [2024-11-19 09:49:40.275639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.780 qpair failed and we were unable to recover it. 00:31:53.780 [2024-11-19 09:49:40.275993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.276022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 
00:31:53.781 [2024-11-19 09:49:40.276282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.276313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.276672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.276702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.277036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.277080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.277447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.277479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.277806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.277835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 
00:31:53.781 [2024-11-19 09:49:40.278218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.278249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.278613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.278643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.278959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.278989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.279354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.279385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.279612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.279641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 
00:31:53.781 [2024-11-19 09:49:40.280000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.280029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.280136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.280173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.280380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.280410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.280771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.280799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.281022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.281055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 
00:31:53.781 [2024-11-19 09:49:40.281321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.281353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.281711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.281741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.281944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.281974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.282197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.282226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.282492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.282521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 
00:31:53.781 [2024-11-19 09:49:40.282871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.282900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.283218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.283247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.283451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.283481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.283861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.283891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 00:31:53.781 [2024-11-19 09:49:40.284273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.781 [2024-11-19 09:49:40.284303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:53.781 qpair failed and we were unable to recover it. 
00:31:53.781 [2024-11-19 09:49:40.284673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.781 [2024-11-19 09:49:40.284701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.781 qpair failed and we were unable to recover it.
00:31:53.781 [... the same connect()/"qpair failed and we were unable to recover it" sequence for tqpair=0xb430c0 repeated 84 more times, through 2024-11-19 09:49:40.312557 ...]
00:31:53.783 [2024-11-19 09:49:40.312660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.783 [2024-11-19 09:49:40.312688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:53.783 qpair failed and we were unable to recover it.
00:31:53.783 [2024-11-19 09:49:40.312964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb38e00 is same with the state(6) to be set
00:31:53.783 [... 32 queued I/Os (15 reads, 17 writes) each logged "Read/Write completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:31:53.784 [2024-11-19 09:49:40.314017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:53.784 [2024-11-19 09:49:40.314552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.784 [2024-11-19 09:49:40.314660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.784 qpair failed and we were unable to recover it.
00:31:53.784 [... the same connect()/"qpair failed and we were unable to recover it" sequence for tqpair=0x7f1408000b90 repeated 20 more times, through 2024-11-19 09:49:40.322240 ...]
00:31:53.784 [2024-11-19 09:49:40.322365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.784 [2024-11-19 09:49:40.322398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.784 qpair failed and we were unable to recover it. 00:31:53.784 [2024-11-19 09:49:40.322608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.784 [2024-11-19 09:49:40.322637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.784 qpair failed and we were unable to recover it. 00:31:53.784 [2024-11-19 09:49:40.322837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.784 [2024-11-19 09:49:40.322869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.784 qpair failed and we were unable to recover it. 00:31:53.784 [2024-11-19 09:49:40.323299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.784 [2024-11-19 09:49:40.323330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.784 qpair failed and we were unable to recover it. 00:31:53.784 [2024-11-19 09:49:40.323547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.784 [2024-11-19 09:49:40.323578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.784 qpair failed and we were unable to recover it. 
00:31:53.784 [2024-11-19 09:49:40.323931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.784 [2024-11-19 09:49:40.323962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.784 qpair failed and we were unable to recover it. 00:31:53.784 [2024-11-19 09:49:40.324199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.784 [2024-11-19 09:49:40.324232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.784 qpair failed and we were unable to recover it. 00:31:53.784 [2024-11-19 09:49:40.324649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.784 [2024-11-19 09:49:40.324681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.784 qpair failed and we were unable to recover it. 00:31:53.784 [2024-11-19 09:49:40.325036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.325066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.325252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.325283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 
00:31:53.785 [2024-11-19 09:49:40.325690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.325720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.326080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.326112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.326469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.326501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.326884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.326914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.327288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.327318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 
00:31:53.785 [2024-11-19 09:49:40.327698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.327726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.328090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.328132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.328237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.328265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.328651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.328681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.329056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.329084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 
00:31:53.785 [2024-11-19 09:49:40.329456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.329487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.329694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.329723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.330077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.330106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.330499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.330530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.330736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.330764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 
00:31:53.785 [2024-11-19 09:49:40.331140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.331183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.331537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.331567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.331882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.331911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.332156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.332197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.332553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.332582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 
00:31:53.785 [2024-11-19 09:49:40.332940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.332969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.333326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.333358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.333702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.333732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.333951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.333980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.334338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.334368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 
00:31:53.785 [2024-11-19 09:49:40.334692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.334721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.335063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.335093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.335467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.335498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.335733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.335782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.336151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.336190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 
00:31:53.785 [2024-11-19 09:49:40.336532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.785 [2024-11-19 09:49:40.336562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.785 qpair failed and we were unable to recover it. 00:31:53.785 [2024-11-19 09:49:40.336902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.336932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.337143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.337183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.337532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.337561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.337912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.337944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 
00:31:53.786 [2024-11-19 09:49:40.338306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.338337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.338663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.338690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.339050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.339079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.339417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.339449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.339830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.339859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 
00:31:53.786 [2024-11-19 09:49:40.340195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.340224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.340582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.340611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.341017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.341046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.341400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.341430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.341796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.341828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 
00:31:53.786 [2024-11-19 09:49:40.342182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.342215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.342598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.342626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.342988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.343019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.343264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.343294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.343574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.343608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 
00:31:53.786 [2024-11-19 09:49:40.343944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.343973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.344344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.344377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.344769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.344797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.345169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.345199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.345557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.345586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 
00:31:53.786 [2024-11-19 09:49:40.345911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.345943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.346172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.346201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.346404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.346432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.346753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.346782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.347154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.347195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 
00:31:53.786 [2024-11-19 09:49:40.347577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.347605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.347971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.348000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.348216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.348246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.348607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.348637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.348953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.348990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 
00:31:53.786 [2024-11-19 09:49:40.349331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.349361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.349719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.349748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.350122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.350151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.786 [2024-11-19 09:49:40.350499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.786 [2024-11-19 09:49:40.350534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.786 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.350853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.350884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 
00:31:53.787 [2024-11-19 09:49:40.351263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.351294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.351504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.351533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.351731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.351759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.352117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.352145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.352535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.352565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 
00:31:53.787 [2024-11-19 09:49:40.352944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.352973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.353330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.353362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.353604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.353634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.353976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.354005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.354331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.354362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 
00:31:53.787 [2024-11-19 09:49:40.354696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.354725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.354975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.355005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.355387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.355419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.355771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.355799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.356171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.356201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 
00:31:53.787 [2024-11-19 09:49:40.356561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.356599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.356942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.356980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.357244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.357276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.357500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.357529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.357879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.357908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 
00:31:53.787 [2024-11-19 09:49:40.358272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.358302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.358659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.358688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.359039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.359069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.359287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.359317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.359687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.359728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 
00:31:53.787 [2024-11-19 09:49:40.359941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.359971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.360326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.360356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.360722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.360750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.361100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.361128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.361507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.361537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 
00:31:53.787 [2024-11-19 09:49:40.361869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.361899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.362131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.362182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.362526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.362556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.362904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.362941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 00:31:53.787 [2024-11-19 09:49:40.363278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.787 [2024-11-19 09:49:40.363308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.787 qpair failed and we were unable to recover it. 
00:31:53.787 [2024-11-19 09:49:40.363633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.363670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.364031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.364061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.364425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.364454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.364778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.364813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.365166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.365197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 
00:31:53.788 [2024-11-19 09:49:40.365526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.365554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.365754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.365781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.366139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.366176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.366430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.366459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.366815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.366843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 
00:31:53.788 [2024-11-19 09:49:40.367043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.367071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.367431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.367462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.367712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.367740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.368147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.368194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.368559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.368589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 
00:31:53.788 [2024-11-19 09:49:40.368936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.368966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.369214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.369244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.369588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.369619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.369966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.369995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.370227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.370256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 
00:31:53.788 [2024-11-19 09:49:40.370616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.370646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.371000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.371029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.371411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.371441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.371758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.371790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.372129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.372165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 
00:31:53.788 [2024-11-19 09:49:40.372520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.372549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.372910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.372940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.373303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.373333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.373700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.373728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.374052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.374094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 
00:31:53.788 [2024-11-19 09:49:40.374497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.374530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.374879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.374908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.375142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.375181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.375540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.375571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.375964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.375992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 
00:31:53.788 [2024-11-19 09:49:40.376226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.376255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.376511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.376540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.788 [2024-11-19 09:49:40.376755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.788 [2024-11-19 09:49:40.376784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.788 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.377153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.377191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.377435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.377464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 
00:31:53.789 [2024-11-19 09:49:40.377829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.377868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.378219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.378258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.378542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.378571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.378941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.378977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.379338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.379368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 
00:31:53.789 [2024-11-19 09:49:40.379686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.379714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.380026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.380055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.380400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.380430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.380772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.380801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.381008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.381037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 
00:31:53.789 [2024-11-19 09:49:40.381388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.381419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.381634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.381662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.382016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.382046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.382275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.382306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 00:31:53.789 [2024-11-19 09:49:40.382665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.789 [2024-11-19 09:49:40.382694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.789 qpair failed and we were unable to recover it. 
00:31:53.789 [2024-11-19 09:49:40.383065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.789 [2024-11-19 09:49:40.383094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.789 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for each retry from 09:49:40.383466 through 09:49:40.421408; only the timestamps differ ...]
00:31:53.792 [2024-11-19 09:49:40.421764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.421795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.792 [2024-11-19 09:49:40.422153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.422204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.792 [2024-11-19 09:49:40.422411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.422440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.792 [2024-11-19 09:49:40.422800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.422838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.792 [2024-11-19 09:49:40.423182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.423216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 
00:31:53.792 [2024-11-19 09:49:40.423539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.423567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.792 [2024-11-19 09:49:40.423936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.423966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.792 [2024-11-19 09:49:40.424204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.424235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.792 [2024-11-19 09:49:40.424557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.424587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.792 [2024-11-19 09:49:40.424964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.424993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 
00:31:53.792 [2024-11-19 09:49:40.425350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.425382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.792 [2024-11-19 09:49:40.425642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.792 [2024-11-19 09:49:40.425670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.792 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.425987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.426017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.426337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.426366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.426718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.426747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 
00:31:53.793 [2024-11-19 09:49:40.427066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.427106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.427493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.427525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.427888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.427917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.428284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.428317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.428542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.428573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 
00:31:53.793 [2024-11-19 09:49:40.428921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.428957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.429294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.429325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.429644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.429681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.429911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.429939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.430259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.430291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 
00:31:53.793 [2024-11-19 09:49:40.430697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.430727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.430946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.430975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.431320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.431349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.431708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.431738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.432098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.432128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 
00:31:53.793 [2024-11-19 09:49:40.432579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.432610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.432980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.433020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.433235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.433266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.433599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.433628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.433997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.434028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 
00:31:53.793 [2024-11-19 09:49:40.434394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.434425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.434695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.434726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.435047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.435078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.435297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.435326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.435643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.435673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 
00:31:53.793 [2024-11-19 09:49:40.435986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.436027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.436381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.436411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.436730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.436761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.437112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.437141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.437481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.437511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 
00:31:53.793 [2024-11-19 09:49:40.437830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.437859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.438217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.438250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.438590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.438620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.438971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.439001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.793 qpair failed and we were unable to recover it. 00:31:53.793 [2024-11-19 09:49:40.439323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.793 [2024-11-19 09:49:40.439354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 
00:31:53.794 [2024-11-19 09:49:40.439684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.439714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.439935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.439966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.440323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.440353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.440690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.440729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.441075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.441104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 
00:31:53.794 [2024-11-19 09:49:40.441475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.441506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.441742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.441772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.441917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.441946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.442194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.442228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.442328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.442356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 
00:31:53.794 [2024-11-19 09:49:40.442714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.442750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.443147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.443199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.443527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.443557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.443902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.443931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.444285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.444327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 
00:31:53.794 [2024-11-19 09:49:40.444667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.444697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.445018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.445048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.445368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.445399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.445737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.445767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.446093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.446135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 
00:31:53.794 [2024-11-19 09:49:40.446358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.446388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.446725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.446756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.446970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.446998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.447330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.447361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 00:31:53.794 [2024-11-19 09:49:40.447729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.794 [2024-11-19 09:49:40.447763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420 00:31:53.794 qpair failed and we were unable to recover it. 
00:31:53.794 [2024-11-19 09:49:40.448153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.794 [2024-11-19 09:49:40.448191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.794 qpair failed and we were unable to recover it.
00:31:53.794 [2024-11-19 09:49:40.448563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.794 [2024-11-19 09:49:40.448604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.794 qpair failed and we were unable to recover it.
00:31:53.794 [2024-11-19 09:49:40.448822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.794 [2024-11-19 09:49:40.448852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.794 qpair failed and we were unable to recover it.
00:31:53.794 [2024-11-19 09:49:40.449212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.794 [2024-11-19 09:49:40.449244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.794 qpair failed and we were unable to recover it.
00:31:53.794 [2024-11-19 09:49:40.449477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.794 [2024-11-19 09:49:40.449507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.794 qpair failed and we were unable to recover it.
00:31:53.794 [2024-11-19 09:49:40.449864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.794 [2024-11-19 09:49:40.449895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.794 qpair failed and we were unable to recover it.
00:31:53.794 [2024-11-19 09:49:40.450236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.450268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.450644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.450674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.451037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.451068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.451423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.451454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.451792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.451830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.452178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.452210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.452466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.452497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.452613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.452641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.453016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.453045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.453422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.453463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.453790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.453822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.454187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.454220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.454637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.454668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.455016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.455046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.455404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.455435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.455774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.455809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.456208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.456241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.456581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.456619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.456987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.457016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.457111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.457147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1408000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Write completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 Read completed with error (sct=0, sc=8)
00:31:53.795 starting I/O failed
00:31:53.795 [2024-11-19 09:49:40.457946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:53.795 [2024-11-19 09:49:40.458490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.458610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.459055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.459096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.459362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.459397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.459761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.459792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.460026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.460058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.460288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.460321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.460508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.795 [2024-11-19 09:49:40.460540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.795 qpair failed and we were unable to recover it.
00:31:53.795 [2024-11-19 09:49:40.460918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.460950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.461301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.461334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.461701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.461732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.462104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.462134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.462319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.462351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.462598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.462629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.462839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.462871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.463225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.463258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.463646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.463676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.463909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.463940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.464370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.464402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.464611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.464641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.464866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.464897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.465153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.465197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.465564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.465593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.465959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.465990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.466358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.466393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.466752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.466781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.467025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.467055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.467308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.467339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.467698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.467727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.467941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.467974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.468313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.468346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.468692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.468722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.469088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.469119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.469485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.469524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.469736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.469765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.470115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.470147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.470528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.470559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.470914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.470944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.471312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.471345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.471563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.471592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.471819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.471849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.472104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.472136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.472475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.472506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.472872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.472902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.473113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.473144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.796 qpair failed and we were unable to recover it.
00:31:53.796 [2024-11-19 09:49:40.473511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.796 [2024-11-19 09:49:40.473541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.473913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.473943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.474322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.474353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.474745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.474775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.475171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.475201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.475410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.475441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.475787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.475816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.476192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.476223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.476479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.476509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.476868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.476897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.477262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.477292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.477639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.477667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.478022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.478051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.478279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.478309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.478676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.478704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.478937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.478968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.479197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.479229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.479545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.479576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.479691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.479719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.479956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.479987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.480229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.480261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.480625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.480653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.481013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.481043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.481356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.481387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.481779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.481809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.482035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.482064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.482313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.482342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.482686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.482714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.483079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.483115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.483357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.483387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.483756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.483791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.484171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.484202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.484455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.484486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.484848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.484877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.485236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.485267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.485660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.485690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.486040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:53.797 [2024-11-19 09:49:40.486071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:53.797 qpair failed and we were unable to recover it.
00:31:53.797 [2024-11-19 09:49:40.486413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.797 [2024-11-19 09:49:40.486444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.797 qpair failed and we were unable to recover it. 00:31:53.797 [2024-11-19 09:49:40.486654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.486683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.486994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.487023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.487244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.487275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.487650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.487679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 
00:31:53.798 [2024-11-19 09:49:40.488007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.488041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.488337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.488368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.488726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.488756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.489000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.489029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.489361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.489392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 
00:31:53.798 [2024-11-19 09:49:40.489732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.489772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.489985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.490016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.490275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.490309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.490675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.490709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.491060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.491089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 
00:31:53.798 [2024-11-19 09:49:40.491493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.491524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.491876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.491905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.492023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.492053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.492397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.492428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.492743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.492774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 
00:31:53.798 [2024-11-19 09:49:40.493014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.493042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.493319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.493350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.493744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.493774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.494149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.494198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.494436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.494468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 
00:31:53.798 [2024-11-19 09:49:40.494684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.494716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.494968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.494998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.495334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.495365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.495741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.495770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.496001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.496030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 
00:31:53.798 [2024-11-19 09:49:40.496350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.496380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.496693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.496729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.497077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.497108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.497338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.497369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.798 [2024-11-19 09:49:40.497579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.497607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 
00:31:53.798 [2024-11-19 09:49:40.497971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.798 [2024-11-19 09:49:40.498001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.798 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.498378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.498411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.498634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.498663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.499004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.499045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.499414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.499444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 
00:31:53.799 [2024-11-19 09:49:40.499763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.499793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.500155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.500195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.500589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.500618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.500983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.501013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.501401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.501432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 
00:31:53.799 [2024-11-19 09:49:40.501705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.501733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.502102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.502134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.502400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.502431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.502671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.502701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.503081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.503112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 
00:31:53.799 [2024-11-19 09:49:40.503497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.503528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.503892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.503922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.504300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.504331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.504716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.504747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.505013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.505041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 
00:31:53.799 [2024-11-19 09:49:40.505393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.505424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.505751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.505780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.506143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.506192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.506558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.506588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.506962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.506991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 
00:31:53.799 [2024-11-19 09:49:40.507332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.507362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.507464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.507491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.507635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.507676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:53.799 [2024-11-19 09:49:40.507929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.799 [2024-11-19 09:49:40.507957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:53.799 qpair failed and we were unable to recover it. 00:31:54.075 [2024-11-19 09:49:40.508208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.508242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 
00:31:54.075 [2024-11-19 09:49:40.508472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.508502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 00:31:54.075 [2024-11-19 09:49:40.508775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.508805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 00:31:54.075 [2024-11-19 09:49:40.509203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.509234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 00:31:54.075 [2024-11-19 09:49:40.509552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.509584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 00:31:54.075 [2024-11-19 09:49:40.509809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.509839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 
00:31:54.075 [2024-11-19 09:49:40.510072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.510101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 00:31:54.075 [2024-11-19 09:49:40.510499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.510547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 00:31:54.075 [2024-11-19 09:49:40.510935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.510964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 00:31:54.075 [2024-11-19 09:49:40.511227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.511258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 00:31:54.075 [2024-11-19 09:49:40.511641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.075 [2024-11-19 09:49:40.511670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.075 qpair failed and we were unable to recover it. 
00:31:54.075 [2024-11-19 09:49:40.512055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.512085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.512361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.512391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.512637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.512665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.512767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.512795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.513009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.513041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.513404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.513434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.513804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.513834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.513977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.514005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.514249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.514279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.514707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.514738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.514956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.514985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.515321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.515352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.515718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.515759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.516117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.516152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.516507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.075 [2024-11-19 09:49:40.516537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.075 qpair failed and we were unable to recover it.
00:31:54.075 [2024-11-19 09:49:40.516902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.516932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.517304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.517335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.517697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.517726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.518046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.518074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.518433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.518464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.518835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.518864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.519227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.519258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.519653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.519683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.520046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.520076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.520431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.520461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.520694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.520722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.521077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.521106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.521382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.521413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.521791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.521821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.522205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.522238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.522628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.522657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.523029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.523059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.523283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.523313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.523658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.523687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.524046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.524077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.524457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.524487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.524865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.524901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.525261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.525291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.525660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.525689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.525923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.525951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.526172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.526205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.526550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.526578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.526744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.526777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.527142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.527183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.527538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.527567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.527885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.527915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.528145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.528185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.528531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.528561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.528658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.528686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.529017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.529049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.529339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.529370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.529581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.529613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.076 [2024-11-19 09:49:40.529959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.076 [2024-11-19 09:49:40.529990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.076 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.530287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.530318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.530579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.530608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.530857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.530887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.531232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.531262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.531489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.531523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.531899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.531929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.532193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.532224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.532445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.532475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.532859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.532888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.533278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.533309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.533685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.533716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.534064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.534095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.534339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.534369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.534600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.534629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.534894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.534924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.535149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.535204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.535570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.535601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.535967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.535998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.536364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.536396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.536770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.536799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.537126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.537170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.537517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.537563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.537787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.537816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.538038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.538077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.538471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.538505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.538855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.538885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.539206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.539237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.539357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.539386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.539774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.539803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.540058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.540088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.540287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.540317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.540657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.540688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.540913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.540942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.541318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.541349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.541555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.541583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.541867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.541901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.077 [2024-11-19 09:49:40.542148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.077 [2024-11-19 09:49:40.542188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.077 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.542597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.542626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.542994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.543024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.543370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.543402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.543767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.543797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.544021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.544050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.544348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.544379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.544592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.544621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.544995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.545025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.545241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.545272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.545633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.545662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.545986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.546014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.546332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.546363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.546597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.546627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.547011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.547048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.547185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.547222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.547471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.547501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.547866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.547895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.548240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.548270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.548568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.548597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.548974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.549003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.549349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.549379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.549613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.549642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.549998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.550029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.550329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.550360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.550734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.078 [2024-11-19 09:49:40.550764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.078 qpair failed and we were unable to recover it.
00:31:54.078 [2024-11-19 09:49:40.551088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.551128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.551401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.551431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.551754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.551785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.552155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.552198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.552564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.552594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 
00:31:54.078 [2024-11-19 09:49:40.552913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.552942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.553255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.553286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.553492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.553520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.553911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.553940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.554264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.554296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 
00:31:54.078 [2024-11-19 09:49:40.554642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.554672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.554779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.554807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.078 qpair failed and we were unable to recover it. 00:31:54.078 [2024-11-19 09:49:40.555202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.078 [2024-11-19 09:49:40.555233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.555597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.555626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.555988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.556017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 
00:31:54.079 [2024-11-19 09:49:40.556265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.556296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.556678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.556709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.557075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.557106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.557454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.557484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.557723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.557752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 
00:31:54.079 [2024-11-19 09:49:40.558125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.558154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.558502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.558533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.558774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.558803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.559017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.559045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.559369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.559402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 
00:31:54.079 [2024-11-19 09:49:40.559757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.559786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.560181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.560212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.560583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.560612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.560979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.561014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.561381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.561456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 
00:31:54.079 [2024-11-19 09:49:40.561807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.561837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.562179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.562210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.562456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.562486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.562824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.562853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.563237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.563269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 
00:31:54.079 [2024-11-19 09:49:40.563649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.563678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.564058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.564088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.564456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.564487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.564702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.564733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.564862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.564895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 
00:31:54.079 [2024-11-19 09:49:40.565306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.565336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.565658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.565686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.566008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.566039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.079 [2024-11-19 09:49:40.566412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.079 [2024-11-19 09:49:40.566443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.079 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.566696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.566725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 
00:31:54.080 [2024-11-19 09:49:40.567088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.567116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.567343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.567377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.567590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.567618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.567836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.567865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.568256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.568287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 
00:31:54.080 [2024-11-19 09:49:40.568664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.568693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.568911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.568939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.569314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.569345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.569711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.569740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.570096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.570124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 
00:31:54.080 [2024-11-19 09:49:40.570375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.570405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.570662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.570706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.570921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.570950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.571312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.571344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.571697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.571726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 
00:31:54.080 [2024-11-19 09:49:40.572105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.572134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.572376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.572405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.572606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.572638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.572991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.573020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.573393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.573426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 
00:31:54.080 [2024-11-19 09:49:40.573753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.573781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.574179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.574209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.574583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.574615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.574958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.574993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.575387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.575418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 
00:31:54.080 [2024-11-19 09:49:40.575639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.575669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.575909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.575937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.576314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.576344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.576691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.576721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 00:31:54.080 [2024-11-19 09:49:40.576950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.080 [2024-11-19 09:49:40.576979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.080 qpair failed and we were unable to recover it. 
00:31:54.080 [2024-11-19 09:49:40.577299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.080 [2024-11-19 09:49:40.577330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.080 qpair failed and we were unable to recover it.
00:31:54.080 [... the three lines above repeat for 115 consecutive retries between 09:49:40.577 and 09:49:40.617, every attempt failing with errno = 111 (ECONNREFUSED) for tqpair=0x7f140c000b90 against addr=10.0.0.2, port=4420 ...]
00:31:54.083 [2024-11-19 09:49:40.617179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.083 [2024-11-19 09:49:40.617211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420
00:31:54.083 qpair failed and we were unable to recover it.
00:31:54.083 [2024-11-19 09:49:40.617576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.083 [2024-11-19 09:49:40.617604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.083 qpair failed and we were unable to recover it. 00:31:54.083 [2024-11-19 09:49:40.617965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.083 [2024-11-19 09:49:40.617995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.083 qpair failed and we were unable to recover it. 00:31:54.083 [2024-11-19 09:49:40.618331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.083 [2024-11-19 09:49:40.618361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.083 qpair failed and we were unable to recover it. 00:31:54.083 [2024-11-19 09:49:40.618733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.083 [2024-11-19 09:49:40.618763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.083 qpair failed and we were unable to recover it. 00:31:54.083 [2024-11-19 09:49:40.619010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.619040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 
00:31:54.084 [2024-11-19 09:49:40.619422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.619453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.619812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.619841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.620230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.620260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.620619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.620657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.620991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.621020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 
00:31:54.084 [2024-11-19 09:49:40.621377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.621409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.621756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.621786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.622008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.622037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.622358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.622388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.622601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.622630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 
00:31:54.084 [2024-11-19 09:49:40.622984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.623012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.623389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.623420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.623736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.623776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.624127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.624174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.624521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.624550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 
00:31:54.084 [2024-11-19 09:49:40.624844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.624872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.625195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.625225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.625580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.625611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.625959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.625989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.626216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.626247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 
00:31:54.084 [2024-11-19 09:49:40.626618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.626647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.627062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.627091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.627425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.627464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.627825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.627854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.628223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.628253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 
00:31:54.084 [2024-11-19 09:49:40.628564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.628593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.628957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.628985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.629346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.629375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.629631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.629661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.630023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.630053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 
00:31:54.084 [2024-11-19 09:49:40.630282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.630311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.084 [2024-11-19 09:49:40.630628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.084 [2024-11-19 09:49:40.630659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.084 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.630901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.630929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.631151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.631198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.631558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.631587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 
00:31:54.085 [2024-11-19 09:49:40.631961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.631991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.632352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.632382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.632630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.632660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.632867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.632897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.633268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.633300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 
00:31:54.085 [2024-11-19 09:49:40.633651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.633680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.634054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.634084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.634466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.634496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.634871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.634900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.635155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.635195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 
00:31:54.085 [2024-11-19 09:49:40.635557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.635586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.635948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.635976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.636337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.636369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.636579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.636607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.636964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.636992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 
00:31:54.085 [2024-11-19 09:49:40.637314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.637344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.637733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.637764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.638114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.638143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.638519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.638549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.638870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.638899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 
00:31:54.085 [2024-11-19 09:49:40.639116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.639145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.639510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.639540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.639840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.639869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.640087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.640117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.640345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.640375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 
00:31:54.085 [2024-11-19 09:49:40.640760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.640789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.641060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.641091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.641420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.641452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.641663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.641694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.642066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.642095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 
00:31:54.085 [2024-11-19 09:49:40.642469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.642500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.642728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.642756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.643002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.643031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.643386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.643417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.085 qpair failed and we were unable to recover it. 00:31:54.085 [2024-11-19 09:49:40.643791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.085 [2024-11-19 09:49:40.643821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 
00:31:54.086 [2024-11-19 09:49:40.644082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.644111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.644436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.644468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.644702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.644732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.645102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.645139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.645496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.645527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 
00:31:54.086 [2024-11-19 09:49:40.645746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.645775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.646027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.646057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.646383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.646416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.646770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.646800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.646995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.647025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 
00:31:54.086 [2024-11-19 09:49:40.647396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.647428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.647675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.647705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.647963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.647995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.648328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.648360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.648630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.648659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 
00:31:54.086 [2024-11-19 09:49:40.648980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.649010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.649232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.649264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.649538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.649568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.650011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.650040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.650220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.650250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 
00:31:54.086 [2024-11-19 09:49:40.650498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.650527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.650885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.650915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.651256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.651287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.651494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.651524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.651757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.651784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 
00:31:54.086 [2024-11-19 09:49:40.652029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.652060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.652423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.652457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.652803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.652833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.653212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.653244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.653599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.653629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 
00:31:54.086 [2024-11-19 09:49:40.653804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.653833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.654061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.654091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.654471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.654502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.654755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.654783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.655019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.655052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 
00:31:54.086 [2024-11-19 09:49:40.655457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.655489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.655704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.086 [2024-11-19 09:49:40.655733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.086 qpair failed and we were unable to recover it. 00:31:54.086 [2024-11-19 09:49:40.656083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.656113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.656365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.656396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.656766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.656794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 
00:31:54.087 [2024-11-19 09:49:40.656894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.656933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.657307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.657339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.657705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.657735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.657950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.657985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.658396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.658428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 
00:31:54.087 [2024-11-19 09:49:40.658810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.658841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.659178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.659208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.659466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.659499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.659809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.659839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.660209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.660240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 
00:31:54.087 [2024-11-19 09:49:40.660472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.660503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.660882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.660912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.661284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.661314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.661577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.661606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.661967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.661998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 
00:31:54.087 [2024-11-19 09:49:40.662338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.662368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.662727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.662757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.662904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.662933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.663329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.663359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.663577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.663605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 
00:31:54.087 [2024-11-19 09:49:40.663813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.663842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.664200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.664231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.664613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.664643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.665020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.665050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.665369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.665400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 
00:31:54.087 [2024-11-19 09:49:40.665790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.665819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.666194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.666226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.666435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.666464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.666780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.666811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.667175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.667205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 
00:31:54.087 [2024-11-19 09:49:40.667564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.667593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.667957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.667995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.668219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.668249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.668624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.668654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 00:31:54.087 [2024-11-19 09:49:40.669015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.087 [2024-11-19 09:49:40.669046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.087 qpair failed and we were unable to recover it. 
00:31:54.088 [2024-11-19 09:49:40.669292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.669324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.669671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.669707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.670071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.670099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.670506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.670538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.670899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.670928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 
00:31:54.088 [2024-11-19 09:49:40.671174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.671204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.671424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.671453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.671810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.671840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.672202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.672239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.672602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.672630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 
00:31:54.088 [2024-11-19 09:49:40.672840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.672869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.673231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.673261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.673608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.673638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.673843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.673872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.674231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.674262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 
00:31:54.088 [2024-11-19 09:49:40.674499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.674527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.674789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.674818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.675042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.675075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.675236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.675267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.675668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.675696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 
00:31:54.088 [2024-11-19 09:49:40.676073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.676103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.676457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.676488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.676844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.676874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.677000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.677029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 00:31:54.088 [2024-11-19 09:49:40.677402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.088 [2024-11-19 09:49:40.677433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f140c000b90 with addr=10.0.0.2, port=4420 00:31:54.088 qpair failed and we were unable to recover it. 
00:31:54.089 [2024-11-19 09:49:40.692055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.089 [2024-11-19 09:49:40.692151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.089 qpair failed and we were unable to recover it.
00:31:54.091 [2024-11-19 09:49:40.717072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.717101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.717486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.717517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.717871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.717900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.718227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.718258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.718542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.718571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 
00:31:54.091 [2024-11-19 09:49:40.718953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.718982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.719203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.719233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.719603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.719632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.719970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.720004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.720374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.720405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 
00:31:54.091 [2024-11-19 09:49:40.720736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.720764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.721120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.091 [2024-11-19 09:49:40.721151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.091 qpair failed and we were unable to recover it. 00:31:54.091 [2024-11-19 09:49:40.721389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.721419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.721786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.721816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.722185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.722215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 
00:31:54.092 [2024-11-19 09:49:40.722591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.722620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.722839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.722868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.722994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.723022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.723375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.723405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.723838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.723869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 
00:31:54.092 [2024-11-19 09:49:40.724214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.724244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.724628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.724658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.725034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.725063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.725288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.725317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.725425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.725453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 
00:31:54.092 [2024-11-19 09:49:40.725785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.725816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.726184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.726216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.726467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.726497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.726709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.726739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.727115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.727144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 
00:31:54.092 [2024-11-19 09:49:40.727426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.727455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.727705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.727733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.728058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.728090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.728473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.728503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.728840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.728878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 
00:31:54.092 [2024-11-19 09:49:40.729218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.729249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.729640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.729670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.729931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.729964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.730324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.730356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.730708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.730738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 
00:31:54.092 [2024-11-19 09:49:40.731102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.731130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.731470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.731502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.731693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.731720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.732107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.732134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.732501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.732531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 
00:31:54.092 [2024-11-19 09:49:40.732777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.732807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.733179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.733210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.733564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.733593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.733961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.733989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 00:31:54.092 [2024-11-19 09:49:40.734358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.092 [2024-11-19 09:49:40.734389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.092 qpair failed and we were unable to recover it. 
00:31:54.092 [2024-11-19 09:49:40.734725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.734753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.735120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.735150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.735513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.735542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.735883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.735911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.736270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.736300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 
00:31:54.093 [2024-11-19 09:49:40.736651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.736679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.737059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.737089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.737460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.737493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.737744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.737773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.738013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.738044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 
00:31:54.093 [2024-11-19 09:49:40.738290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.738320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.738682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.738711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.738933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.738960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.739314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.739346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.739704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.739734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 
00:31:54.093 [2024-11-19 09:49:40.740001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.740030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.740375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.740404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.740637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.740666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.740998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.741026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.741400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.741431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 
00:31:54.093 [2024-11-19 09:49:40.741681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.741709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.741940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.741967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.742363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.742395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.742767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.742796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.743167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.743196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 
00:31:54.093 [2024-11-19 09:49:40.743416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.743445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.743820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.743855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.744245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.744275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.744668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.744697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.745069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.745097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 
00:31:54.093 [2024-11-19 09:49:40.745411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.745441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.745810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.745839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.746218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.746250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.093 qpair failed and we were unable to recover it. 00:31:54.093 [2024-11-19 09:49:40.746454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.093 [2024-11-19 09:49:40.746482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.746843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.746871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 
00:31:54.094 [2024-11-19 09:49:40.747226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.747256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.747458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.747486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.747734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.747762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.748121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.748152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.748519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.748549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 
00:31:54.094 [2024-11-19 09:49:40.748917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.748947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.749318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.749347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.749661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.749689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.749998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.750029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.750391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.750422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 
00:31:54.094 [2024-11-19 09:49:40.750794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.750824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.751090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.751120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.751519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.751548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.751807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.751836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.752053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.752083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 
00:31:54.094 [2024-11-19 09:49:40.752424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.752454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.752688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.752717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.752942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.752973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.753102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.753131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.753550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.753582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 
00:31:54.094 [2024-11-19 09:49:40.753950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.753979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.754377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.754408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.754748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.754776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.754996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.755026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.755388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.755420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 
00:31:54.094 [2024-11-19 09:49:40.755786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.755815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.755979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.756010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.756270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.756300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.756551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.756586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.756919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.756948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 
00:31:54.094 [2024-11-19 09:49:40.757194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.757224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.757487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.757518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.757964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.758001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.758378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.094 [2024-11-19 09:49:40.758408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.094 qpair failed and we were unable to recover it. 00:31:54.094 [2024-11-19 09:49:40.758721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.758750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 
00:31:54.095 [2024-11-19 09:49:40.759180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.759210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.759555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.759583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.759860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.759889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.760247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.760277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.760641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.760670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 
00:31:54.095 [2024-11-19 09:49:40.761039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.761067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.761289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.761319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.761573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.761601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.761823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.761850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.762100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.762129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 
00:31:54.095 [2024-11-19 09:49:40.762380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.762411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.762771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.762800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.763145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.763184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.763446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.763475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.763821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.763848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 
00:31:54.095 [2024-11-19 09:49:40.764182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.764214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.764581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.764612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.764856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.764886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.765266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.765297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.765649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.765678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 
00:31:54.095 [2024-11-19 09:49:40.765883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.765913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.766267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.766300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.766543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.766572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.766943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.766974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.767356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.767393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 
00:31:54.095 [2024-11-19 09:49:40.767758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.767789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.768140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.768178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.768552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.768583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.768839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.768868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.769094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.769123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 
00:31:54.095 [2024-11-19 09:49:40.769374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.769404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.769768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.769799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.770206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.770238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.770611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.770640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.771013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.771042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 
00:31:54.095 [2024-11-19 09:49:40.771150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.771190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.771561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.095 [2024-11-19 09:49:40.771692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.095 qpair failed and we were unable to recover it. 00:31:54.095 [2024-11-19 09:49:40.772020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.772065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.772369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.772415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.772852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.772891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 
00:31:54.096 [2024-11-19 09:49:40.773153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.773208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.773590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.773629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.773999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.774038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.774459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.774500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.774766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.774806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 
00:31:54.096 [2024-11-19 09:49:40.775071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.775109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.775512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.775552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.775827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.775867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.776236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.776278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.776554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.776593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 
00:31:54.096 [2024-11-19 09:49:40.776966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.777005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.777398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.777447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.777693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.777733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.778183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.778224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.778617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.778657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 
00:31:54.096 [2024-11-19 09:49:40.778923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.778964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.779356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.779397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.779769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.779809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.780222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.780262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.780653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.780691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 
00:31:54.096 [2024-11-19 09:49:40.781114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.781153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.781556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.781596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.781998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.782036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.782355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.782396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.782655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.782693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 
00:31:54.096 [2024-11-19 09:49:40.783061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.783101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.783521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.783561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.783925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.783966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.784386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.784427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.784788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.784826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 
00:31:54.096 [2024-11-19 09:49:40.785220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.785262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.785615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.785654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.786086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.786127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.786531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.786571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 00:31:54.096 [2024-11-19 09:49:40.786964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.096 [2024-11-19 09:49:40.787002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.096 qpair failed and we were unable to recover it. 
00:31:54.096 [2024-11-19 09:49:40.787386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.787426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.787822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.787864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.788228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.788269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.788678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.788718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.789140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.789201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 
00:31:54.097 [2024-11-19 09:49:40.789603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.789642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.790056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.790095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.790354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.790395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.790757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.790796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.791211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.791250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 
00:31:54.097 [2024-11-19 09:49:40.791670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.791709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.792108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.792148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.792456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.792495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.792851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.792891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.793128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.793181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 
00:31:54.097 [2024-11-19 09:49:40.793606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.793645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.794038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.794086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.794517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.794558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.794820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.794858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.795220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.795260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 
00:31:54.097 [2024-11-19 09:49:40.795640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.795678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.795962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.796001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.796262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.796302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.796703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.796743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.797004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.797042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 
00:31:54.097 [2024-11-19 09:49:40.797359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.797399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.797804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.797843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.798101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.798139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.798550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.798590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.798841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.798879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 
00:31:54.097 [2024-11-19 09:49:40.799171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.799212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.799589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.799628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.800030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.800069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.800481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.800522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.800885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.800924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 
00:31:54.097 [2024-11-19 09:49:40.801092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.801144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.801557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.801596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.097 [2024-11-19 09:49:40.802030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.097 [2024-11-19 09:49:40.802068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.097 qpair failed and we were unable to recover it. 00:31:54.098 [2024-11-19 09:49:40.802357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.802397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 00:31:54.098 [2024-11-19 09:49:40.802754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.802793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 
00:31:54.098 [2024-11-19 09:49:40.803172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.803219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 00:31:54.098 [2024-11-19 09:49:40.803502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.803542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 00:31:54.098 [2024-11-19 09:49:40.803908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.803948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 00:31:54.098 [2024-11-19 09:49:40.804328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.804369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 00:31:54.098 [2024-11-19 09:49:40.804734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.804772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 
00:31:54.098 [2024-11-19 09:49:40.805016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.805054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 00:31:54.098 [2024-11-19 09:49:40.805426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.805467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 00:31:54.098 [2024-11-19 09:49:40.805848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.098 [2024-11-19 09:49:40.805889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.098 qpair failed and we were unable to recover it. 00:31:54.369 [2024-11-19 09:49:40.806286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.369 [2024-11-19 09:49:40.806328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.369 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.806708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.806747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 
00:31:54.370 [2024-11-19 09:49:40.806983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.807021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.807419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.807458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.807819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.807857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.808139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.808190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.808570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.808608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 
00:31:54.370 [2024-11-19 09:49:40.808868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.808907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.809170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.809228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.809495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.809533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.809784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.809825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.810186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.810225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 
00:31:54.370 [2024-11-19 09:49:40.810433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.810472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.810874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.810913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.811196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.811238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.811690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.811728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.811988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.812027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 
00:31:54.370 [2024-11-19 09:49:40.812223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.812273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.812653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.812694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.812940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.812979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.813380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.813422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 00:31:54.370 [2024-11-19 09:49:40.813702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.370 [2024-11-19 09:49:40.813743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.370 qpair failed and we were unable to recover it. 
00:31:54.370 [2024-11-19 09:49:40.814141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.814193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.814544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.814583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.815003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.815041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.815446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.815486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.815852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.815890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.816255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.816297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.816692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.816731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.817111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.817151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.817426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.817466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.817739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.817777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.818020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.818059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.818384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.818426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.818677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.818715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.819132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.819190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.819583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.819623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.370 [2024-11-19 09:49:40.819988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.370 [2024-11-19 09:49:40.820027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.370 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.820414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.820455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.820740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.820778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.821139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.821189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.821553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.821592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.821852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.821891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.822139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.822188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.822586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.822626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.822896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.822934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.823319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.823359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.823719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.823758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.824116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.824177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.824571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.824611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.824830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.824870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.825266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.825307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.825670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.825708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.825986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.826024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.826391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.826432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.826677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.826715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.827085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.827122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.827530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.827571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.827934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.827973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.828374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.828413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.828774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.828813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.828974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.829026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.829428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.829470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.829831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.829870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.830229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.830270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.830685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.830723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.831081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.831120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.831387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.831426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.831873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.831912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.832307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.832347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.832711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.832749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.833112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.833151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.833573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.833611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.833771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.833825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.834241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.371 [2024-11-19 09:49:40.834282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.371 qpair failed and we were unable to recover it.
00:31:54.371 [2024-11-19 09:49:40.834672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.834712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.835071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.835110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.835523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.835562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.835923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.835963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.836215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.836255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.836673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.836711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.837030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.837068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.837472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.837512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.837800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.837838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.838256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.838298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.838594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.838636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.839000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.839038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.839402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.839443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.839809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.839857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.840252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.840293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.840550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.840589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.840955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.840994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.841386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.841426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.841818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.841857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.842152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.842207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.842604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.842643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.842900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.842939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.843212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.843252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.843649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.843689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.844045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.844084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.844362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.844402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.844644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.844685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.845079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.845119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.845434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.845475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.845891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.845929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.846299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.846341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.846707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.846747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.847179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.847219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.847593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.847632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.847988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.848026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.848298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.848337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.848720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.848759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.372 [2024-11-19 09:49:40.849171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.372 [2024-11-19 09:49:40.849211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.372 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.849605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.849644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.850006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.850045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.850410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.850452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.850834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.850872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.851234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.851274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.851414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.851461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.851714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.851753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.852015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.852055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.852421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.852462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.852743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.852781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.853025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.853064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.853457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.853497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.853654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.853707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.854136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.854190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.854557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.854596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.854860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.854907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.855273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.855313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.855693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.855731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.856091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.856130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.856408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.856448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.856695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.856734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.857149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.857201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.857456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.857494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.857897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.373 [2024-11-19 09:49:40.857935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.373 qpair failed and we were unable to recover it.
00:31:54.373 [2024-11-19 09:49:40.858296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.858336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.858575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.858615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.859030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.859069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.859459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.859499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.859797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.859838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 
00:31:54.373 [2024-11-19 09:49:40.860099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.860139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.860575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.860614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.860974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.861012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.861251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.861291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.861724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.861763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 
00:31:54.373 [2024-11-19 09:49:40.862121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.862175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.862552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.862590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.862950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.862988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.373 [2024-11-19 09:49:40.863374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.373 [2024-11-19 09:49:40.863415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.373 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.863776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.863814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 
00:31:54.374 [2024-11-19 09:49:40.864178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.864219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.864632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.864670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.865088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.865125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.865499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.865539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.865931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.865970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 
00:31:54.374 [2024-11-19 09:49:40.866331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.866369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.866743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.866781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.867181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.867219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.867642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.867680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.868040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.868077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 
00:31:54.374 [2024-11-19 09:49:40.868485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.868525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.868844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.868882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.869240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.869280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.869653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.869692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.869945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.869983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 
00:31:54.374 [2024-11-19 09:49:40.870338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.870377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.870698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.870744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.871019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.871057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.871328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.871368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.871765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.871804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 
00:31:54.374 [2024-11-19 09:49:40.872153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.872221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.872618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.872656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.873122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.873174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.873577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.873615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.873987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.874025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 
00:31:54.374 [2024-11-19 09:49:40.874415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.874454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.874833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.874870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.875274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.875313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.875696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.875734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 00:31:54.374 [2024-11-19 09:49:40.876104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.374 [2024-11-19 09:49:40.876142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.374 qpair failed and we were unable to recover it. 
00:31:54.375 [2024-11-19 09:49:40.876447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.876485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.876850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.876887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.877148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.877197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.877550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.877588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.877948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.877985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 
00:31:54.375 [2024-11-19 09:49:40.878298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.878337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.878715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.878753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.879181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.879220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.879514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.879556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.879974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.880013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 
00:31:54.375 [2024-11-19 09:49:40.880403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.880444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.880832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.880870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.881239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.881278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.881532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.881570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.881840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.881878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 
00:31:54.375 [2024-11-19 09:49:40.882143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.882196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.882564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.882601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.882895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.882937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.883307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.883348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.883737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.883773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 
00:31:54.375 [2024-11-19 09:49:40.884069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.884110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.884380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.884420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.884793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.884832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.885202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.885241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.885492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.885531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 
00:31:54.375 [2024-11-19 09:49:40.885806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.885843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.886084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.886122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.886389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.886428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.886788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.886825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.887200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.887240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 
00:31:54.375 [2024-11-19 09:49:40.887506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.887547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.887944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.887983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.888228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.888267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.888687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.888724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.889088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.889125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 
00:31:54.375 [2024-11-19 09:49:40.889433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.889472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.889741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.889778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.375 qpair failed and we were unable to recover it. 00:31:54.375 [2024-11-19 09:49:40.890201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.375 [2024-11-19 09:49:40.890241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.890655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.890694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.891061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.891098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 
00:31:54.376 [2024-11-19 09:49:40.891532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.891572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.891810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.891849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.892227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.892266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.892633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.892670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.892937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.892976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 
00:31:54.376 [2024-11-19 09:49:40.893403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.893442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.893808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.893846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.894229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.894268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.894668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.894705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.895098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.895135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 
00:31:54.376 [2024-11-19 09:49:40.895402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.895440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.895713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.895751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.896021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.896058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.896466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.896514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.896696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.896750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 
00:31:54.376 [2024-11-19 09:49:40.897191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.897231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.897664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.897704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.898064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.898103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.898519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.898558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.898927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.898964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 
00:31:54.376 [2024-11-19 09:49:40.899199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.899239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.899653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.899691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.900124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.900175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.900524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.900562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.900865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.900906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 
00:31:54.376 [2024-11-19 09:49:40.901276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.901316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.376 [2024-11-19 09:49:40.901608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.901647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:54.376 [2024-11-19 09:49:40.902003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.902042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.376 [2024-11-19 09:49:40.902394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.376 [2024-11-19 09:49:40.902434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 
00:31:54.376 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:54.376 [2024-11-19 09:49:40.902685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.902723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.903117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.903155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.903548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.903589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.903855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.376 [2024-11-19 09:49:40.903896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.376 qpair failed and we were unable to recover it. 00:31:54.376 [2024-11-19 09:49:40.904229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.904268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 
00:31:54.377 [2024-11-19 09:49:40.904693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.904732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.905009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.905048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.905434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.905474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.905712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.905750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.906123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.906175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 
00:31:54.377 [2024-11-19 09:49:40.906437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.906475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.906814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.906852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.907139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.907192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.907570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.907609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.908007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.908045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 
00:31:54.377 [2024-11-19 09:49:40.908451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.908493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.908868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.908906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.909203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.909246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.909530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.909570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.909828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.909868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 
00:31:54.377 [2024-11-19 09:49:40.910278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.910318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.910677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.910718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.911090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.911129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.911493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.911533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.911792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.911831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 
00:31:54.377 [2024-11-19 09:49:40.912233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.912273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.912619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.912658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.913020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.913059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.913428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.913468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.913866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.913904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 
00:31:54.377 [2024-11-19 09:49:40.914279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.914321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.914719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.914757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.915043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.915082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.915244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.915297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.915668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.915707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 
00:31:54.377 [2024-11-19 09:49:40.916055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.916102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.916528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.916570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.916961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.916999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.917240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.917280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.917648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.917686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 
00:31:54.377 [2024-11-19 09:49:40.918049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.918087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.377 [2024-11-19 09:49:40.918514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.377 [2024-11-19 09:49:40.918555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.377 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.918799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.918836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.919190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.919231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.919664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.919702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 
00:31:54.378 [2024-11-19 09:49:40.919997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.920037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.920318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.920358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.920754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.920792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.921093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.921134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.921587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.921626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 
00:31:54.378 [2024-11-19 09:49:40.921915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.921952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.922340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.922381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.922799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.922838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.923200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.923241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.923611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.923651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 
00:31:54.378 [2024-11-19 09:49:40.923940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.923978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.924377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.924416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.924781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.924819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.925191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.925231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.925573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.925612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 
00:31:54.378 [2024-11-19 09:49:40.925979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.926018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.926418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.926458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.926861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.926900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.927262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.927302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.927695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.927733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 
00:31:54.378 [2024-11-19 09:49:40.927975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.928013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.928422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.928462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.928823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.928862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.929107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.929145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.929449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.929488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 
00:31:54.378 [2024-11-19 09:49:40.929864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.929902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.930387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.930427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.930780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.930819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.931184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.931224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.931622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.931661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 
00:31:54.378 [2024-11-19 09:49:40.932050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.932096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.932491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.932532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.932905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.932944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.933217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.933259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 00:31:54.378 [2024-11-19 09:49:40.933531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.378 [2024-11-19 09:49:40.933570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.378 qpair failed and we were unable to recover it. 
00:31:54.379 [2024-11-19 09:49:40.933948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.933989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.934268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.934307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.934695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.934734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.935132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.935182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.935549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.935586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 
00:31:54.379 [2024-11-19 09:49:40.935823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.935862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.936225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.936266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.936515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.936554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.936982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.937020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.937291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.937331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 
00:31:54.379 [2024-11-19 09:49:40.937696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.937736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.938101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.938139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.938507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.938546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.938930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.938969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.939336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.939377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 
00:31:54.379 [2024-11-19 09:49:40.939635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.939675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.939979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.940017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.940278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.940323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.940735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.940773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.941143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.941191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 
00:31:54.379 [2024-11-19 09:49:40.941547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.941587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.941946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.941985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.942381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.942429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.942861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.942901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.943261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.943301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 
00:31:54.379 [2024-11-19 09:49:40.943684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.943723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.943955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.943993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 [2024-11-19 09:49:40.944279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.944319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.379 [2024-11-19 09:49:40.944742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.944783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 
00:31:54.379 [2024-11-19 09:49:40.945028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.945068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.379 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:54.379 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.379 [2024-11-19 09:49:40.945449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.379 [2024-11-19 09:49:40.945491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.379 qpair failed and we were unable to recover it. 00:31:54.380 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:54.380 [2024-11-19 09:49:40.945854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.945893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.946276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.946315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 
00:31:54.380 [2024-11-19 09:49:40.946700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.946738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.947115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.947155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.947537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.947575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.947970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.948008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.948384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.948424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 
00:31:54.380 [2024-11-19 09:49:40.948786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.948823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.949191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.949229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.949630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.949669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.950027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.950065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.950328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.950367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 
00:31:54.380 [2024-11-19 09:49:40.950624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.950662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.951079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.951116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.951381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.951420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.951783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.951820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.952083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.952122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 
00:31:54.380 [2024-11-19 09:49:40.952544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.952583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.952947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.952986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.953388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.953428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.953795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.953834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.954233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.954272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 
00:31:54.380 [2024-11-19 09:49:40.954654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.954692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.955081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.955120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.955501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.955539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.955899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.955938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.956352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.956392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 
00:31:54.380 [2024-11-19 09:49:40.956752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.956790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.957035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.957073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.957490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.957538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.957903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.957942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.958302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.958342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 
00:31:54.380 [2024-11-19 09:49:40.958601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.958638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.958994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.959032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.959425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.959465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.959729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.959766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 00:31:54.380 [2024-11-19 09:49:40.960196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.380 [2024-11-19 09:49:40.960235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.380 qpair failed and we were unable to recover it. 
00:31:54.380 [2024-11-19 09:49:40.960629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.960667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 00:31:54.381 [2024-11-19 09:49:40.961029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.961067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 00:31:54.381 [2024-11-19 09:49:40.961461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.961500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 00:31:54.381 [2024-11-19 09:49:40.961884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.961923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 00:31:54.381 [2024-11-19 09:49:40.962283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.962322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 
00:31:54.381 [2024-11-19 09:49:40.962616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.962654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 00:31:54.381 [2024-11-19 09:49:40.962930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.962973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 00:31:54.381 [2024-11-19 09:49:40.963246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.963285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 00:31:54.381 [2024-11-19 09:49:40.963686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.963725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 00:31:54.381 [2024-11-19 09:49:40.964152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:54.381 [2024-11-19 09:49:40.964200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420 00:31:54.381 qpair failed and we were unable to recover it. 
00:31:54.381 [2024-11-19 09:49:40.964442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.964480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.964856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.964894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.965285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.965326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.965621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.965663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.965943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.965982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.966384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.966424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.966779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.966817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.967065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.967103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.967556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.967595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.967960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.967999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.968268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.968308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.968730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.968768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.969055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.969093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.969500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.969539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.969899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.969937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.970335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.970375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.970738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.970777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.971150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.971198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.971594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.971632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.971988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.972026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.972306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.972346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.972720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.972759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.972999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.973044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.973405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.973445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.973805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.973843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.974240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.974279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.974649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.974687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.974945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.381 [2024-11-19 09:49:40.974983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.381 qpair failed and we were unable to recover it.
00:31:54.381 [2024-11-19 09:49:40.975386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.975428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.975826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.975865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.976230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.976270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.976651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.976690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.976968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.977007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.977247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.977287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.977546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.977587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.977963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.978001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.978260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.978301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.978728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.978768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.979033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.979071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.979471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.979512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.979911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.979949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.980314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.980353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.980714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.980752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.981181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.981222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.981410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.981460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 Malloc0
00:31:54.382 [2024-11-19 09:49:40.981874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.981914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.982305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.982345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:54.382 [2024-11-19 09:49:40.982538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.982588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:54.382 [2024-11-19 09:49:40.982984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.983024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:54.382 [2024-11-19 09:49:40.983387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.983427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:54.382 [2024-11-19 09:49:40.983774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.983814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.984226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.984266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.984524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.984562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.984995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.985033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.985331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.985370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.985650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.985689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.986052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.986089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.986495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.986537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.986892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.986930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.987286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.987325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.987721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.987759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.988177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.988216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.988567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.988604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.382 [2024-11-19 09:49:40.988905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:54.382 [2024-11-19 09:49:40.988966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.382 [2024-11-19 09:49:40.989014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.382 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.989400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.989439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.989801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.989839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.989995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.990043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.990291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.990331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.990581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.990619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.990887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.990925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.991342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.991382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.991802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.991841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.992257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.992295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.992652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.992690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.992969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.993007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.993302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.993341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.993765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.993803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.994093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.994131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.994536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.994575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.994934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.994971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.995377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.995416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.995832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.995870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.996240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.996279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.996625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.996663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.996946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.996986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.997295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.997338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.997734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.997773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:54.383 [2024-11-19 09:49:40.998153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.998221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:54.383 [2024-11-19 09:49:40.998629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:54.383 [2024-11-19 09:49:40.998673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 09:49:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:54.383 [2024-11-19 09:49:40.999071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.999111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.999482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.999522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:40.999905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:40.999942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:41.000302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:41.000343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:41.000703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:41.000740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:41.001033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:41.001075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.383 qpair failed and we were unable to recover it.
00:31:54.383 [2024-11-19 09:49:41.001496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.383 [2024-11-19 09:49:41.001536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.001902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.001941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.002332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.002373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.002735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.002784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.003157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.003211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.003508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.003546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.003829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.003867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.004122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.004174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.004576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.004616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.004979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.005016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.005315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.005356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.005726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.005764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.006124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.006170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.006459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.006498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.006914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.006953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.007204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.007244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.007637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.007675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.008054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.008092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.008298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.008337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.008717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.008755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.009155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.009225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.009534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.009572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.009837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.009876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:54.384 [2024-11-19 09:49:41.010276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.010316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 [2024-11-19 09:49:41.010671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable [2024-11-19 09:49:41.010710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-11-19 09:49:41.011003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.011041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.011447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.011486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.011852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.011889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.012257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.012298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.012690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.012727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.013146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.013200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.013477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.013515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.013938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.013976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.014393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.014433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.014802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.014839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.015211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.015250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.384 qpair failed and we were unable to recover it.
00:31:54.384 [2024-11-19 09:49:41.015650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.384 [2024-11-19 09:49:41.015688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.016057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.016096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.016560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.016603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.016902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.016941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.017300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.017342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.017702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.017739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.017907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.017953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.018203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.018244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.018722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.018759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.019056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.019095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.019518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.019558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.019929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.019968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.020346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.020384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.020772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.020810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.021227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.021267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.021689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.021728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:54.385 [2024-11-19 09:49:41.022094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.022133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.022436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-19 09:49:41.022476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable [2024-11-19 09:49:41.022884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.022924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-11-19 09:49:41.023211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.023251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1414000b90 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.023748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.023860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.024276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.024318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.024746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.024779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.025147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.025192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.025607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.025716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.026004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.026042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.026563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.026670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.027069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.027107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.027413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.027446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.027558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.027587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.027945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.027988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.028341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.028373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.028595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.028626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.028734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.028764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.029165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:54.385 [2024-11-19 09:49:41.029196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb430c0 with addr=10.0.0.2, port=4420
00:31:54.385 qpair failed and we were unable to recover it.
00:31:54.385 [2024-11-19 09:49:41.029294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:54.385 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:54.385 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:54.386 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:54.386 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:54.386 [2024-11-19 09:49:41.040337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:54.386 [2024-11-19 09:49:41.040477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:54.386 [2024-11-19 09:49:41.040528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:54.386 [2024-11-19 09:49:41.040550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:54.386 [2024-11-19 09:49:41.040571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:54.386 [2024-11-19 09:49:41.040625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:54.386 qpair failed and we were unable to recover it.
00:31:54.386 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:54.386 09:49:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 528149
00:31:54.386 [2024-11-19 09:49:41.050069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:54.386 [2024-11-19 09:49:41.050187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:54.386 [2024-11-19 09:49:41.050218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:54.386 [2024-11-19 09:49:41.050234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:54.386 [2024-11-19 09:49:41.050246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:54.386 [2024-11-19 09:49:41.050277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:54.386 qpair failed and we were unable to recover it.
00:31:54.386 [2024-11-19 09:49:41.060044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:54.386 [2024-11-19 09:49:41.060129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:54.386 [2024-11-19 09:49:41.060151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:54.386 [2024-11-19 09:49:41.060167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:54.386 [2024-11-19 09:49:41.060177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:54.386 [2024-11-19 09:49:41.060199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:54.386 qpair failed and we were unable to recover it.
00:31:54.386 [2024-11-19 09:49:41.070041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:54.386 [2024-11-19 09:49:41.070118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:54.386 [2024-11-19 09:49:41.070135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:54.386 [2024-11-19 09:49:41.070142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:54.386 [2024-11-19 09:49:41.070149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:54.386 [2024-11-19 09:49:41.070171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:54.386 qpair failed and we were unable to recover it.
00:31:54.386 [2024-11-19 09:49:41.080052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:54.386 [2024-11-19 09:49:41.080138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:54.386 [2024-11-19 09:49:41.080155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:54.386 [2024-11-19 09:49:41.080167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:54.386 [2024-11-19 09:49:41.080175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:54.386 [2024-11-19 09:49:41.080192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:54.386 qpair failed and we were unable to recover it.
00:31:54.386 [2024-11-19 09:49:41.090025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.386 [2024-11-19 09:49:41.090100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.386 [2024-11-19 09:49:41.090117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.386 [2024-11-19 09:49:41.090124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.386 [2024-11-19 09:49:41.090131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.386 [2024-11-19 09:49:41.090148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.386 qpair failed and we were unable to recover it. 
00:31:54.386 [2024-11-19 09:49:41.100066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.386 [2024-11-19 09:49:41.100136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.386 [2024-11-19 09:49:41.100164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.386 [2024-11-19 09:49:41.100172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.386 [2024-11-19 09:49:41.100179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.386 [2024-11-19 09:49:41.100197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.386 qpair failed and we were unable to recover it. 
00:31:54.650 [2024-11-19 09:49:41.110047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.650 [2024-11-19 09:49:41.110113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.650 [2024-11-19 09:49:41.110130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.650 [2024-11-19 09:49:41.110138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.650 [2024-11-19 09:49:41.110144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.650 [2024-11-19 09:49:41.110167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.650 qpair failed and we were unable to recover it. 
00:31:54.650 [2024-11-19 09:49:41.120129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.650 [2024-11-19 09:49:41.120213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.650 [2024-11-19 09:49:41.120230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.650 [2024-11-19 09:49:41.120238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.650 [2024-11-19 09:49:41.120245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.650 [2024-11-19 09:49:41.120262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.650 qpair failed and we were unable to recover it. 
00:31:54.650 [2024-11-19 09:49:41.130147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.650 [2024-11-19 09:49:41.130214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.650 [2024-11-19 09:49:41.130231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.650 [2024-11-19 09:49:41.130239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.650 [2024-11-19 09:49:41.130247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.650 [2024-11-19 09:49:41.130264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.650 qpair failed and we were unable to recover it. 
00:31:54.650 [2024-11-19 09:49:41.140164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.650 [2024-11-19 09:49:41.140277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.650 [2024-11-19 09:49:41.140294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.650 [2024-11-19 09:49:41.140302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.650 [2024-11-19 09:49:41.140314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.650 [2024-11-19 09:49:41.140331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.650 qpair failed and we were unable to recover it. 
00:31:54.650 [2024-11-19 09:49:41.150221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.650 [2024-11-19 09:49:41.150310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.650 [2024-11-19 09:49:41.150326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.650 [2024-11-19 09:49:41.150333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.650 [2024-11-19 09:49:41.150340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.650 [2024-11-19 09:49:41.150357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.650 qpair failed and we were unable to recover it. 
00:31:54.650 [2024-11-19 09:49:41.160247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.650 [2024-11-19 09:49:41.160321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.650 [2024-11-19 09:49:41.160337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.650 [2024-11-19 09:49:41.160344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.650 [2024-11-19 09:49:41.160351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.650 [2024-11-19 09:49:41.160368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.650 qpair failed and we were unable to recover it. 
00:31:54.650 [2024-11-19 09:49:41.170264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.650 [2024-11-19 09:49:41.170323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.650 [2024-11-19 09:49:41.170340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.650 [2024-11-19 09:49:41.170348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.650 [2024-11-19 09:49:41.170355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.170371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.180269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.180339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.180355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.180363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.180370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.180386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.190288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.190396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.190413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.190420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.190427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.190444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.200267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.200345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.200362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.200370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.200376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.200393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.210273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.210349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.210366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.210374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.210381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.210398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.220534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.220603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.220620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.220627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.220634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.220650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.230486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.230565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.230588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.230595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.230602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.230618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.240532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.240604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.240621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.240628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.240635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.240652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.250561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.250627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.250644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.250651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.250658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.250673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.260530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.260602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.260620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.260627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.260634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.260650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.270438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.270510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.270527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.270534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.270546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.270563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.280559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.280633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.280651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.280659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.280665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.280682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.290639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.290706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.290723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.290730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.290737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.290753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.651 qpair failed and we were unable to recover it. 
00:31:54.651 [2024-11-19 09:49:41.300648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.651 [2024-11-19 09:49:41.300720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.651 [2024-11-19 09:49:41.300737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.651 [2024-11-19 09:49:41.300745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.651 [2024-11-19 09:49:41.300752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.651 [2024-11-19 09:49:41.300769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.652 [2024-11-19 09:49:41.310691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.652 [2024-11-19 09:49:41.310794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.652 [2024-11-19 09:49:41.310811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.652 [2024-11-19 09:49:41.310818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.652 [2024-11-19 09:49:41.310824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.652 [2024-11-19 09:49:41.310841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.652 [2024-11-19 09:49:41.320609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.652 [2024-11-19 09:49:41.320682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.652 [2024-11-19 09:49:41.320698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.652 [2024-11-19 09:49:41.320705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.652 [2024-11-19 09:49:41.320712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.652 [2024-11-19 09:49:41.320727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.652 [2024-11-19 09:49:41.330709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.652 [2024-11-19 09:49:41.330777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.652 [2024-11-19 09:49:41.330795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.652 [2024-11-19 09:49:41.330802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.652 [2024-11-19 09:49:41.330808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.652 [2024-11-19 09:49:41.330824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.652 [2024-11-19 09:49:41.340807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.652 [2024-11-19 09:49:41.340876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.652 [2024-11-19 09:49:41.340894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.652 [2024-11-19 09:49:41.340901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.652 [2024-11-19 09:49:41.340908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.652 [2024-11-19 09:49:41.340925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.652 [2024-11-19 09:49:41.350839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.652 [2024-11-19 09:49:41.350944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.652 [2024-11-19 09:49:41.350961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.652 [2024-11-19 09:49:41.350969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.652 [2024-11-19 09:49:41.350975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.652 [2024-11-19 09:49:41.350992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.652 [2024-11-19 09:49:41.360831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.652 [2024-11-19 09:49:41.360906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.652 [2024-11-19 09:49:41.360950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.652 [2024-11-19 09:49:41.360960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.652 [2024-11-19 09:49:41.360967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.652 [2024-11-19 09:49:41.360993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.652 [2024-11-19 09:49:41.370854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.652 [2024-11-19 09:49:41.370930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.652 [2024-11-19 09:49:41.370966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.652 [2024-11-19 09:49:41.370976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.652 [2024-11-19 09:49:41.370984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.652 [2024-11-19 09:49:41.371008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.652 [2024-11-19 09:49:41.380853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.652 [2024-11-19 09:49:41.380923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.652 [2024-11-19 09:49:41.380943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.652 [2024-11-19 09:49:41.380951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.652 [2024-11-19 09:49:41.380958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.652 [2024-11-19 09:49:41.380977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.652 [2024-11-19 09:49:41.390899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.652 [2024-11-19 09:49:41.390968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.652 [2024-11-19 09:49:41.390986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.652 [2024-11-19 09:49:41.390993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.652 [2024-11-19 09:49:41.391000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.652 [2024-11-19 09:49:41.391017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.652 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.400954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.401034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.401052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.401059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.401083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.401101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.410949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.411021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.411039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.411046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.411053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.411070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.421002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.421064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.421081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.421088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.421095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.421112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.431009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.431082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.431099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.431106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.431113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.431129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.441087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.441168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.441186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.441194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.441200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.441217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.451077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.451141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.451162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.451170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.451177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.451193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.461109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.461180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.461196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.461204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.461211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.461227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.471138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.471212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.471229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.471236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.471243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.471260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.481211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.481289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.481305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.481312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.481319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.481335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.491217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.491286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.491308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.491316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.491322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.491339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.501232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.501295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.501311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.501318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.501325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.501341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.916 [2024-11-19 09:49:41.511246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.916 [2024-11-19 09:49:41.511314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.916 [2024-11-19 09:49:41.511331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.916 [2024-11-19 09:49:41.511338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.916 [2024-11-19 09:49:41.511344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.916 [2024-11-19 09:49:41.511361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.916 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.521332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.521410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.521425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.521433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.521440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.521457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.531316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.531384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.531403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.531411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.531423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.531442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.541235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.541306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.541323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.541330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.541338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.541355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.551379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.551447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.551464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.551473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.551480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.551497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.561438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.561553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.561570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.561577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.561584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.561600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.571489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.571572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.571589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.571597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.571603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.571620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.581469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.581531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.581548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.581555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.581562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.581578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.591516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.591603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.591619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.591627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.591633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.591650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.601571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.601651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.601667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.601674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.601682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.601698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.611529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.611586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.611603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.611611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.611617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.611634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.621591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.621669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.621691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.621698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.621705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.621721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.631571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.631650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.631667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.631674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.631681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.631697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.917 [2024-11-19 09:49:41.641738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.917 [2024-11-19 09:49:41.641804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.917 [2024-11-19 09:49:41.641821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.917 [2024-11-19 09:49:41.641828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.917 [2024-11-19 09:49:41.641834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.917 [2024-11-19 09:49:41.641851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.917 qpair failed and we were unable to recover it. 
00:31:54.918 [2024-11-19 09:49:41.651613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:54.918 [2024-11-19 09:49:41.651702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:54.918 [2024-11-19 09:49:41.651719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:54.918 [2024-11-19 09:49:41.651726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:54.918 [2024-11-19 09:49:41.651733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:54.918 [2024-11-19 09:49:41.651749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:54.918 qpair failed and we were unable to recover it. 
00:31:55.180 [2024-11-19 09:49:41.661703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.180 [2024-11-19 09:49:41.661762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.180 [2024-11-19 09:49:41.661781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.180 [2024-11-19 09:49:41.661788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.180 [2024-11-19 09:49:41.661801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.180 [2024-11-19 09:49:41.661818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.180 qpair failed and we were unable to recover it. 
00:31:55.180 [2024-11-19 09:49:41.671737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.180 [2024-11-19 09:49:41.671863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.180 [2024-11-19 09:49:41.671880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.180 [2024-11-19 09:49:41.671888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.180 [2024-11-19 09:49:41.671894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.180 [2024-11-19 09:49:41.671910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.180 qpair failed and we were unable to recover it. 
00:31:55.180 [2024-11-19 09:49:41.681816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.180 [2024-11-19 09:49:41.681896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.180 [2024-11-19 09:49:41.681932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.180 [2024-11-19 09:49:41.681942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.180 [2024-11-19 09:49:41.681949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.181 [2024-11-19 09:49:41.681974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.181 qpair failed and we were unable to recover it. 
00:31:55.181 [2024-11-19 09:49:41.691753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.691826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.691865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.691875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.691883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.691907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.701842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.701959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.701997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.702006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.702013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.702037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.711818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.711886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.711907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.711915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.711922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.711940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.721926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.722007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.722025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.722032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.722039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.722056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.731791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.731859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.731876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.731883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.731890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.731906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.741964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.742037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.742074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.742083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.742091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.742116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.751947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.752018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.752045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.752053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.752060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.752079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.762030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.762108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.762125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.762132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.762139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.762156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.772040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.772108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.772125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.772132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.772139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.772156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.782074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.782133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.782150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.782157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.782168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.782186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.792105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.792207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.792224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.792231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.792244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.792261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.802163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.802263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.802280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.802287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.802294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.181 [2024-11-19 09:49:41.802310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.181 qpair failed and we were unable to recover it.
00:31:55.181 [2024-11-19 09:49:41.812173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.181 [2024-11-19 09:49:41.812239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.181 [2024-11-19 09:49:41.812256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.181 [2024-11-19 09:49:41.812263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.181 [2024-11-19 09:49:41.812270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.812286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.822491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.822597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.822613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.822621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.822627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.822643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.832219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.832286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.832302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.832310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.832316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.832333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.842287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.842354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.842371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.842379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.842385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.842402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.852313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.852374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.852390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.852397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.852404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.852420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.862320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.862383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.862399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.862406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.862413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.862429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.872349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.872425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.872442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.872449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.872456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.872472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.882394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.882466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.882488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.882495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.882502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.882519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.892391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.892489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.892506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.892513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.892520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.892537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.902415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.902476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.902493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.902500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.902506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.902523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.912447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.912517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.912534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.912541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.912548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.912564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.182 [2024-11-19 09:49:41.922495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.182 [2024-11-19 09:49:41.922570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.182 [2024-11-19 09:49:41.922588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.182 [2024-11-19 09:49:41.922595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.182 [2024-11-19 09:49:41.922607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.182 [2024-11-19 09:49:41.922625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.182 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:41.932462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:41.932529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:41.932546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:41.932553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:41.932560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.446 [2024-11-19 09:49:41.932576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.446 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:41.942485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:41.942553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:41.942569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:41.942577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:41.942583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.446 [2024-11-19 09:49:41.942600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.446 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:41.952569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:41.952635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:41.952652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:41.952660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:41.952666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.446 [2024-11-19 09:49:41.952682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.446 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:41.962591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:41.962675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:41.962692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:41.962700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:41.962706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.446 [2024-11-19 09:49:41.962723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.446 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:41.972625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:41.972720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:41.972737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:41.972744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:41.972751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.446 [2024-11-19 09:49:41.972767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.446 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:41.982638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:41.982698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:41.982715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:41.982722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:41.982729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.446 [2024-11-19 09:49:41.982745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.446 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:41.992688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:41.992758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:41.992774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:41.992781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:41.992788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.446 [2024-11-19 09:49:41.992804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.446 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:42.002729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:42.002793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:42.002811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:42.002819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:42.002826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.446 [2024-11-19 09:49:42.002842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.446 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:42.012735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:42.012805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:42.012827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:42.012835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:42.012841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.446 [2024-11-19 09:49:42.012857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.446 qpair failed and we were unable to recover it.
00:31:55.446 [2024-11-19 09:49:42.022766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.446 [2024-11-19 09:49:42.022859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.446 [2024-11-19 09:49:42.022882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.446 [2024-11-19 09:49:42.022890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.446 [2024-11-19 09:49:42.022896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.447 [2024-11-19 09:49:42.022916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.447 qpair failed and we were unable to recover it.
00:31:55.447 [2024-11-19 09:49:42.032767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:55.447 [2024-11-19 09:49:42.032841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:55.447 [2024-11-19 09:49:42.032878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:55.447 [2024-11-19 09:49:42.032887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:55.447 [2024-11-19 09:49:42.032896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:55.447 [2024-11-19 09:49:42.032919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:55.447 qpair failed and we were unable to recover it.
00:31:55.447 [2024-11-19 09:49:42.042855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.042930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.042967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.042977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.042985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.043009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.052859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.052923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.052943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.052951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.052972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.052991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.062896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.062961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.062979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.062987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.062993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.063011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.072935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.073002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.073020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.073028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.073034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.073051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.082984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.083062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.083083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.083091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.083101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.083121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.093025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.093132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.093152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.093170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.093177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.093195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.103018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.103086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.103104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.103111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.103118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.103134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.113049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.113115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.113132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.113140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.113146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.113173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.122997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.123077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.123094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.123101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.123108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.123124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.133141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.133250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.133267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.133274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.133280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.133297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.143154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.143233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.143258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.143266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.143272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.447 [2024-11-19 09:49:42.143290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.447 qpair failed and we were unable to recover it. 
00:31:55.447 [2024-11-19 09:49:42.153171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.447 [2024-11-19 09:49:42.153248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.447 [2024-11-19 09:49:42.153266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.447 [2024-11-19 09:49:42.153273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.447 [2024-11-19 09:49:42.153280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.448 [2024-11-19 09:49:42.153297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.448 qpair failed and we were unable to recover it. 
00:31:55.448 [2024-11-19 09:49:42.163284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.448 [2024-11-19 09:49:42.163351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.448 [2024-11-19 09:49:42.163369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.448 [2024-11-19 09:49:42.163376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.448 [2024-11-19 09:49:42.163383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.448 [2024-11-19 09:49:42.163399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.448 qpair failed and we were unable to recover it. 
00:31:55.448 [2024-11-19 09:49:42.173208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.448 [2024-11-19 09:49:42.173274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.448 [2024-11-19 09:49:42.173290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.448 [2024-11-19 09:49:42.173297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.448 [2024-11-19 09:49:42.173304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.448 [2024-11-19 09:49:42.173321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.448 qpair failed and we were unable to recover it. 
00:31:55.448 [2024-11-19 09:49:42.183275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.448 [2024-11-19 09:49:42.183339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.448 [2024-11-19 09:49:42.183355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.448 [2024-11-19 09:49:42.183363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.448 [2024-11-19 09:49:42.183375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.448 [2024-11-19 09:49:42.183391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.448 qpair failed and we were unable to recover it. 
00:31:55.710 [2024-11-19 09:49:42.193270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.710 [2024-11-19 09:49:42.193339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.710 [2024-11-19 09:49:42.193357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.710 [2024-11-19 09:49:42.193365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.710 [2024-11-19 09:49:42.193371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.710 [2024-11-19 09:49:42.193388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.710 qpair failed and we were unable to recover it. 
00:31:55.710 [2024-11-19 09:49:42.203357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.710 [2024-11-19 09:49:42.203426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.710 [2024-11-19 09:49:42.203443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.710 [2024-11-19 09:49:42.203450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.710 [2024-11-19 09:49:42.203456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.710 [2024-11-19 09:49:42.203473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.710 qpair failed and we were unable to recover it. 
00:31:55.710 [2024-11-19 09:49:42.213429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.710 [2024-11-19 09:49:42.213514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.710 [2024-11-19 09:49:42.213530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.710 [2024-11-19 09:49:42.213538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.710 [2024-11-19 09:49:42.213544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.710 [2024-11-19 09:49:42.213561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.710 qpair failed and we were unable to recover it. 
00:31:55.710 [2024-11-19 09:49:42.223311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.710 [2024-11-19 09:49:42.223370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.710 [2024-11-19 09:49:42.223387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.710 [2024-11-19 09:49:42.223394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.710 [2024-11-19 09:49:42.223400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.710 [2024-11-19 09:49:42.223416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.710 qpair failed and we were unable to recover it. 
00:31:55.710 [2024-11-19 09:49:42.233419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.710 [2024-11-19 09:49:42.233534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.710 [2024-11-19 09:49:42.233552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.710 [2024-11-19 09:49:42.233559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.710 [2024-11-19 09:49:42.233566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.710 [2024-11-19 09:49:42.233583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.710 qpair failed and we were unable to recover it. 
00:31:55.710 [2024-11-19 09:49:42.243487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.710 [2024-11-19 09:49:42.243567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.710 [2024-11-19 09:49:42.243582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.710 [2024-11-19 09:49:42.243589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.710 [2024-11-19 09:49:42.243596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.710 [2024-11-19 09:49:42.243612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.710 qpair failed and we were unable to recover it. 
00:31:55.710 [2024-11-19 09:49:42.253501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.710 [2024-11-19 09:49:42.253567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.710 [2024-11-19 09:49:42.253584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.710 [2024-11-19 09:49:42.253591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.710 [2024-11-19 09:49:42.253598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.710 [2024-11-19 09:49:42.253615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.710 qpair failed and we were unable to recover it. 
00:31:55.710 [2024-11-19 09:49:42.263406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.710 [2024-11-19 09:49:42.263470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.710 [2024-11-19 09:49:42.263487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.263494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.263501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.263518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.273487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.273549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.273571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.273578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.273584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.273601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.283577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.283648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.283665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.283672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.283679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.283695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.293531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.293591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.293607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.293614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.293621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.293637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.303577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.303643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.303658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.303666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.303672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.303687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.313451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.313508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.313526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.313534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.313545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.313562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.323654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.323720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.323737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.323744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.323750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.323767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.333634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.333697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.333712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.333720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.333729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.333746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.343659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.343711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.343726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.343733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.343739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.343754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.353643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.353695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.353710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.353717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.353723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.353737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.363730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.363795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.363810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.363817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.363825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.363841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.373748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.373803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.373818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.373825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.373831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.373845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.383757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.383845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.711 [2024-11-19 09:49:42.383859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.711 [2024-11-19 09:49:42.383865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.711 [2024-11-19 09:49:42.383872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.711 [2024-11-19 09:49:42.383886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.711 qpair failed and we were unable to recover it. 
00:31:55.711 [2024-11-19 09:49:42.393750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.711 [2024-11-19 09:49:42.393804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.712 [2024-11-19 09:49:42.393818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.712 [2024-11-19 09:49:42.393825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.712 [2024-11-19 09:49:42.393831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.712 [2024-11-19 09:49:42.393844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.712 qpair failed and we were unable to recover it. 
00:31:55.712 [2024-11-19 09:49:42.403845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.712 [2024-11-19 09:49:42.403901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.712 [2024-11-19 09:49:42.403918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.712 [2024-11-19 09:49:42.403925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.712 [2024-11-19 09:49:42.403931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.712 [2024-11-19 09:49:42.403945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.712 qpair failed and we were unable to recover it. 
00:31:55.712 [2024-11-19 09:49:42.413807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.712 [2024-11-19 09:49:42.413855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.712 [2024-11-19 09:49:42.413868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.712 [2024-11-19 09:49:42.413875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.712 [2024-11-19 09:49:42.413881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.712 [2024-11-19 09:49:42.413895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.712 qpair failed and we were unable to recover it. 
00:31:55.712 [2024-11-19 09:49:42.423866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.712 [2024-11-19 09:49:42.423914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.712 [2024-11-19 09:49:42.423927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.712 [2024-11-19 09:49:42.423934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.712 [2024-11-19 09:49:42.423940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.712 [2024-11-19 09:49:42.423953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.712 qpair failed and we were unable to recover it. 
00:31:55.712 [2024-11-19 09:49:42.433860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.712 [2024-11-19 09:49:42.433914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.712 [2024-11-19 09:49:42.433940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.712 [2024-11-19 09:49:42.433949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.712 [2024-11-19 09:49:42.433956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.712 [2024-11-19 09:49:42.433975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.712 qpair failed and we were unable to recover it. 
00:31:55.712 [2024-11-19 09:49:42.443903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.712 [2024-11-19 09:49:42.443958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.712 [2024-11-19 09:49:42.443983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.712 [2024-11-19 09:49:42.443992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.712 [2024-11-19 09:49:42.444003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.712 [2024-11-19 09:49:42.444023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.712 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.453936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.453992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.454008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.454016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.974 [2024-11-19 09:49:42.454022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.974 [2024-11-19 09:49:42.454037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.974 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.463954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.464004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.464018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.464025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.974 [2024-11-19 09:49:42.464031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.974 [2024-11-19 09:49:42.464045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.974 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.473953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.474000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.474013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.474020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.974 [2024-11-19 09:49:42.474027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.974 [2024-11-19 09:49:42.474041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.974 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.484021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.484107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.484121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.484128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.974 [2024-11-19 09:49:42.484134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.974 [2024-11-19 09:49:42.484148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.974 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.493920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.493968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.493981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.493988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.974 [2024-11-19 09:49:42.493994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.974 [2024-11-19 09:49:42.494008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.974 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.504073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.504123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.504136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.504143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.974 [2024-11-19 09:49:42.504150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.974 [2024-11-19 09:49:42.504169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.974 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.513934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.513982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.513994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.514001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.974 [2024-11-19 09:49:42.514007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.974 [2024-11-19 09:49:42.514021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.974 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.524131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.524241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.524254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.524261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.974 [2024-11-19 09:49:42.524267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.974 [2024-11-19 09:49:42.524281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.974 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.534147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.534198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.534215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.534222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.974 [2024-11-19 09:49:42.534228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.974 [2024-11-19 09:49:42.534242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.974 qpair failed and we were unable to recover it. 
00:31:55.974 [2024-11-19 09:49:42.544188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.974 [2024-11-19 09:49:42.544235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.974 [2024-11-19 09:49:42.544248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.974 [2024-11-19 09:49:42.544255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.544261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.544276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.554143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.554196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.554209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.554216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.554222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.554236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.564250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.564308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.564321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.564328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.564334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.564347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.574257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.574303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.574316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.574323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.574332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.574345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.584290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.584341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.584354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.584361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.584367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.584381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.594289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.594336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.594349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.594356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.594363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.594377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.604338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.604389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.604402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.604408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.604415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.604428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.614379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.614433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.614446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.614453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.614460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.614473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.624388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.624434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.624447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.624454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.624460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.624474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.634467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.634553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.634566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.634573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.634580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.634593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.644470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.644525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.644538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.644545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.644551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.644565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.654502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.654549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.654562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.654569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.654575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.654588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.664488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.664537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.664553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.975 [2024-11-19 09:49:42.664561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.975 [2024-11-19 09:49:42.664567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.975 [2024-11-19 09:49:42.664580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.975 qpair failed and we were unable to recover it. 
00:31:55.975 [2024-11-19 09:49:42.674480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.975 [2024-11-19 09:49:42.674552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.975 [2024-11-19 09:49:42.674565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.976 [2024-11-19 09:49:42.674571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.976 [2024-11-19 09:49:42.674578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.976 [2024-11-19 09:49:42.674591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.976 qpair failed and we were unable to recover it. 
00:31:55.976 [2024-11-19 09:49:42.684582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.976 [2024-11-19 09:49:42.684629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.976 [2024-11-19 09:49:42.684643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.976 [2024-11-19 09:49:42.684650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.976 [2024-11-19 09:49:42.684656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.976 [2024-11-19 09:49:42.684669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.976 qpair failed and we were unable to recover it. 
00:31:55.976 [2024-11-19 09:49:42.694592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.976 [2024-11-19 09:49:42.694638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.976 [2024-11-19 09:49:42.694651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.976 [2024-11-19 09:49:42.694658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.976 [2024-11-19 09:49:42.694664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.976 [2024-11-19 09:49:42.694678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.976 qpair failed and we were unable to recover it. 
00:31:55.976 [2024-11-19 09:49:42.704583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.976 [2024-11-19 09:49:42.704632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.976 [2024-11-19 09:49:42.704645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.976 [2024-11-19 09:49:42.704652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.976 [2024-11-19 09:49:42.704664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.976 [2024-11-19 09:49:42.704678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.976 qpair failed and we were unable to recover it. 
00:31:55.976 [2024-11-19 09:49:42.714584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:55.976 [2024-11-19 09:49:42.714630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:55.976 [2024-11-19 09:49:42.714642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:55.976 [2024-11-19 09:49:42.714649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.976 [2024-11-19 09:49:42.714655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:55.976 [2024-11-19 09:49:42.714668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:55.976 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.724677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.724733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.724745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.724752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.724759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.724772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.734701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.734747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.734759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.734766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.734772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.734786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.744737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.744785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.744797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.744804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.744810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.744823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.754714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.754767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.754780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.754787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.754793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.754807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.764798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.764847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.764860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.764866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.764873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.764886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.774772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.774817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.774830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.774837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.774844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.774857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.784828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.784886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.784900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.784907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.784913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.784927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.794843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.794888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.794907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.794914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.794920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.794934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.804912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.805010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.805035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.805043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.805050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.805069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.814896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.814940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.814955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.239 [2024-11-19 09:49:42.814962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.239 [2024-11-19 09:49:42.814969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.239 [2024-11-19 09:49:42.814983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.239 qpair failed and we were unable to recover it. 
00:31:56.239 [2024-11-19 09:49:42.824936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.239 [2024-11-19 09:49:42.824985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.239 [2024-11-19 09:49:42.824999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.825006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.825012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.825026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.834939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.834994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.835007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.835014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.835025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.835038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.844998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.845045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.845058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.845064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.845071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.845085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.854875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.854920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.854933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.854940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.854947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.854960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.865069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.865123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.865136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.865143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.865149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.865166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.874928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.874975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.874988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.874994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.875001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.875014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.885128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.885202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.885216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.885223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.885229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.885243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.895101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.895148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.895166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.895174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.895181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.895195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.905177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.905253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.905266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.905274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.905280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.905293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.915105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.915149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.915167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.915175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.915181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.915194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.925206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.925257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.925274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.925281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.925287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.925301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.935185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.935276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.935289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.935296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.935302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.935316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.945258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.945305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.240 [2024-11-19 09:49:42.945318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.240 [2024-11-19 09:49:42.945325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.240 [2024-11-19 09:49:42.945331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.240 [2024-11-19 09:49:42.945344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.240 qpair failed and we were unable to recover it. 
00:31:56.240 [2024-11-19 09:49:42.955305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.240 [2024-11-19 09:49:42.955348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.241 [2024-11-19 09:49:42.955361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.241 [2024-11-19 09:49:42.955368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.241 [2024-11-19 09:49:42.955374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.241 [2024-11-19 09:49:42.955388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.241 qpair failed and we were unable to recover it. 
00:31:56.241 [2024-11-19 09:49:42.965351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.241 [2024-11-19 09:49:42.965400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.241 [2024-11-19 09:49:42.965413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.241 [2024-11-19 09:49:42.965419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.241 [2024-11-19 09:49:42.965429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.241 [2024-11-19 09:49:42.965443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.241 qpair failed and we were unable to recover it. 
00:31:56.241 [2024-11-19 09:49:42.975318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.241 [2024-11-19 09:49:42.975359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.241 [2024-11-19 09:49:42.975372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.241 [2024-11-19 09:49:42.975379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.241 [2024-11-19 09:49:42.975385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.241 [2024-11-19 09:49:42.975399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.241 qpair failed and we were unable to recover it. 
00:31:56.503 [2024-11-19 09:49:42.985400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.503 [2024-11-19 09:49:42.985450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.503 [2024-11-19 09:49:42.985463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.503 [2024-11-19 09:49:42.985470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.503 [2024-11-19 09:49:42.985476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.503 [2024-11-19 09:49:42.985490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.503 qpair failed and we were unable to recover it. 
00:31:56.503 [2024-11-19 09:49:42.995387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.503 [2024-11-19 09:49:42.995431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.503 [2024-11-19 09:49:42.995444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.503 [2024-11-19 09:49:42.995451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.503 [2024-11-19 09:49:42.995457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.503 [2024-11-19 09:49:42.995470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.503 qpair failed and we were unable to recover it. 
00:31:56.503 [2024-11-19 09:49:43.005449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.503 [2024-11-19 09:49:43.005501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.503 [2024-11-19 09:49:43.005514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.005521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.005527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.005541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.015435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.015508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.015521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.015528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.015534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.015548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.025492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.025547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.025562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.025569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.025576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.025590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.035451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.035541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.035554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.035561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.035567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.035580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.045572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.045627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.045640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.045647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.045654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.045667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.055571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.055615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.055631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.055639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.055645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.055659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.065571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.065618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.065632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.065638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.065645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.065658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.075483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.075540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.075553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.075560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.075566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.075579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.085627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.085680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.085694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.085701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.085707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.085720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.095638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.095684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.095697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.095704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.095714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.095727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.105698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.105745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.105758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.105765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.105771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.105784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.115680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.115729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.115744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.115751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.115757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.115775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.125759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.125844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.504 [2024-11-19 09:49:43.125857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.504 [2024-11-19 09:49:43.125864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.504 [2024-11-19 09:49:43.125871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.504 [2024-11-19 09:49:43.125884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.504 qpair failed and we were unable to recover it. 
00:31:56.504 [2024-11-19 09:49:43.135739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.504 [2024-11-19 09:49:43.135785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.135798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.135805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.135811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.135824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.145808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.145852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.145865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.145872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.145879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.145892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.155798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.155854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.155879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.155888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.155894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.155913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.165874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.165929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.165944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.165951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.165958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.165973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.175843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.175892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.175906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.175913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.175919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.175933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.185902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.185949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.185966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.185973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.185980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.185994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.195867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.195910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.195924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.195931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.195937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.195950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.205861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.205913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.205928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.205935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.205942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.205956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.215837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.215893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.215906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.215913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.215920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.215934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.226112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.226171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.226185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.226192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.226202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.226216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.236025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.236083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.236096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.236104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.236110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.236123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.505 [2024-11-19 09:49:43.246125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.505 [2024-11-19 09:49:43.246184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.505 [2024-11-19 09:49:43.246197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.505 [2024-11-19 09:49:43.246204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.505 [2024-11-19 09:49:43.246210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.505 [2024-11-19 09:49:43.246225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.505 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.256088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.256139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.256151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.256163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.256170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.256183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.266135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.266185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.266198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.266205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.266211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.266225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.276117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.276179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.276193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.276200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.276206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.276220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.286162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.286214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.286227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.286234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.286240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.286253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.296172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.296219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.296232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.296239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.296246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.296260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.306237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.306282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.306296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.306303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.306309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.306323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.316228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.316275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.316291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.316299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.316305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.316319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.326250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.326302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.326314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.326321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.326327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.326341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.336252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.336294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.336307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.336314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.336320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.336334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.346329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.346428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.346441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.346448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.346454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.346467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.356331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.356380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.356393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.356400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.356414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.356428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.769 [2024-11-19 09:49:43.366377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.769 [2024-11-19 09:49:43.366432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.769 [2024-11-19 09:49:43.366445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.769 [2024-11-19 09:49:43.366452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.769 [2024-11-19 09:49:43.366458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.769 [2024-11-19 09:49:43.366471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.769 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.376380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.376430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.376443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.376450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.376456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.376469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.386448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.386500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.386513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.386520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.386526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.386540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.396447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.396493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.396505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.396512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.396518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.396532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.406443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.406485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.406498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.406505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.406511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.406524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.416566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.416610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.416622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.416629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.416636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.416649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.426548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.426596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.426609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.426616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.426622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.426636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.436542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.436590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.436602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.436609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.436616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.436629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.446571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.446620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.446636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.446643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.446649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.446662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.456607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.456651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.456663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.456670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.456676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.456690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.466619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.466699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.466712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.466719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.466726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.466739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.476612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.476656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.476669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.476676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.476683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.476696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.486682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.486727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.486740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.486747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.486757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.486770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.770 [2024-11-19 09:49:43.496698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.770 [2024-11-19 09:49:43.496746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.770 [2024-11-19 09:49:43.496759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.770 [2024-11-19 09:49:43.496765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.770 [2024-11-19 09:49:43.496772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.770 [2024-11-19 09:49:43.496785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.770 qpair failed and we were unable to recover it. 
00:31:56.771 [2024-11-19 09:49:43.506764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:56.771 [2024-11-19 09:49:43.506807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:56.771 [2024-11-19 09:49:43.506820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:56.771 [2024-11-19 09:49:43.506827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:56.771 [2024-11-19 09:49:43.506833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:56.771 [2024-11-19 09:49:43.506846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:56.771 qpair failed and we were unable to recover it. 
00:31:57.032 [2024-11-19 09:49:43.516765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.032 [2024-11-19 09:49:43.516825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.032 [2024-11-19 09:49:43.516849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.516858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.516865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.516884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.526782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.526832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.526857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.526866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.526873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.526891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.536819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.536868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.536892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.536901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.536908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.536926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.546897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.546946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.546961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.546968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.546975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.546990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.556886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.556983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.556997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.557003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.557010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.557024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.566897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.566942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.566955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.566962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.566968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.566983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.576919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.576971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.576989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.576996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.577002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.577016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.586981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.587026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.587039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.587046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.587052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.587066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.596969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.597016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.597030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.597037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.597043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.597057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.606986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.607033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.607046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.607053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.607059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.607073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.617032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.617075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.617088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.617095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.617104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.617118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.627100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.627143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.627156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.627168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.627174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.627188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.637084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.033 [2024-11-19 09:49:43.637136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.033 [2024-11-19 09:49:43.637148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.033 [2024-11-19 09:49:43.637156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.033 [2024-11-19 09:49:43.637168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.033 [2024-11-19 09:49:43.637182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.033 qpair failed and we were unable to recover it. 
00:31:57.033 [2024-11-19 09:49:43.647123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.647173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.647186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.647193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.647200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.647214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.657105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.657179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.657192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.657199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.657205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.657219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.667088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.667145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.667164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.667171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.667177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.667191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.677213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.677261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.677274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.677281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.677287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.677300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.687238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.687289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.687302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.687309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.687316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.687331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.697253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.697323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.697336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.697343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.697350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.697363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.707381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.707434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.707450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.707457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.707464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.707477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.717298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.717344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.717357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.717370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.717377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.717391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.727231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.727280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.727292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.727299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.727306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.727320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.737380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.737423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.737436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.737442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.737449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.737462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.747429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.747475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.747488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.747495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.747505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.747519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.757435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.757507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.757520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.757526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.757533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.757546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.034 [2024-11-19 09:49:43.767444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.034 [2024-11-19 09:49:43.767491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.034 [2024-11-19 09:49:43.767504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.034 [2024-11-19 09:49:43.767511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.034 [2024-11-19 09:49:43.767517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.034 [2024-11-19 09:49:43.767531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.034 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.777474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.297 [2024-11-19 09:49:43.777514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.297 [2024-11-19 09:49:43.777527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.297 [2024-11-19 09:49:43.777534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.297 [2024-11-19 09:49:43.777540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.297 [2024-11-19 09:49:43.777553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.297 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.787534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.297 [2024-11-19 09:49:43.787584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.297 [2024-11-19 09:49:43.787597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.297 [2024-11-19 09:49:43.787604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.297 [2024-11-19 09:49:43.787610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.297 [2024-11-19 09:49:43.787624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.297 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.797453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.297 [2024-11-19 09:49:43.797499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.297 [2024-11-19 09:49:43.797514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.297 [2024-11-19 09:49:43.797521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.297 [2024-11-19 09:49:43.797527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.297 [2024-11-19 09:49:43.797542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.297 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.807557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.297 [2024-11-19 09:49:43.807601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.297 [2024-11-19 09:49:43.807615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.297 [2024-11-19 09:49:43.807622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.297 [2024-11-19 09:49:43.807628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.297 [2024-11-19 09:49:43.807642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.297 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.817586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.297 [2024-11-19 09:49:43.817640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.297 [2024-11-19 09:49:43.817653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.297 [2024-11-19 09:49:43.817660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.297 [2024-11-19 09:49:43.817666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.297 [2024-11-19 09:49:43.817680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.297 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.827645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.297 [2024-11-19 09:49:43.827694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.297 [2024-11-19 09:49:43.827707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.297 [2024-11-19 09:49:43.827714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.297 [2024-11-19 09:49:43.827720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.297 [2024-11-19 09:49:43.827734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.297 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.837655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.297 [2024-11-19 09:49:43.837699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.297 [2024-11-19 09:49:43.837716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.297 [2024-11-19 09:49:43.837723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.297 [2024-11-19 09:49:43.837729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.297 [2024-11-19 09:49:43.837743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.297 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.847673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.297 [2024-11-19 09:49:43.847730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.297 [2024-11-19 09:49:43.847743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.297 [2024-11-19 09:49:43.847750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.297 [2024-11-19 09:49:43.847757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.297 [2024-11-19 09:49:43.847770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.297 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.857719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.297 [2024-11-19 09:49:43.857763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.297 [2024-11-19 09:49:43.857776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.297 [2024-11-19 09:49:43.857783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.297 [2024-11-19 09:49:43.857789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0 00:31:57.297 [2024-11-19 09:49:43.857802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:57.297 qpair failed and we were unable to recover it. 
00:31:57.297 [2024-11-19 09:49:43.867647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.297 [2024-11-19 09:49:43.867694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.297 [2024-11-19 09:49:43.867706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.297 [2024-11-19 09:49:43.867713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.867719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.867733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.877767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.877811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.877825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.877832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.877841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.877856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.887808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.887862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.887875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.887882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.887888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.887901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.897789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.897831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.897845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.897852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.897858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.897872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.907750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.907816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.907828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.907835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.907841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.907855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.917868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.917916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.917929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.917936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.917942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.917956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.927893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.927953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.927967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.927974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.927980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.927993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.937890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.937936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.937949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.937955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.937962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.937975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.947941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.947992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.948005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.948011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.948018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.948031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.957945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.957989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.958004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.958011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.958017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.958031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.968006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.968055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.968071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.968078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.968084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.968098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.978017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.978063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.978075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.978082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.978089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.978102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.988084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.988135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.988148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.988155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.298 [2024-11-19 09:49:43.988167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.298 [2024-11-19 09:49:43.988180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.298 qpair failed and we were unable to recover it.
00:31:57.298 [2024-11-19 09:49:43.998048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.298 [2024-11-19 09:49:43.998099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.298 [2024-11-19 09:49:43.998112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.298 [2024-11-19 09:49:43.998119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.299 [2024-11-19 09:49:43.998125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.299 [2024-11-19 09:49:43.998138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.299 qpair failed and we were unable to recover it.
00:31:57.299 [2024-11-19 09:49:44.008114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.299 [2024-11-19 09:49:44.008168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.299 [2024-11-19 09:49:44.008182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.299 [2024-11-19 09:49:44.008189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.299 [2024-11-19 09:49:44.008202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.299 [2024-11-19 09:49:44.008216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.299 qpair failed and we were unable to recover it.
00:31:57.299 [2024-11-19 09:49:44.018143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.299 [2024-11-19 09:49:44.018191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.299 [2024-11-19 09:49:44.018207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.299 [2024-11-19 09:49:44.018215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.299 [2024-11-19 09:49:44.018221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb430c0
00:31:57.299 [2024-11-19 09:49:44.018237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:57.299 qpair failed and we were unable to recover it.
00:31:57.299 [2024-11-19 09:49:44.028221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.299 [2024-11-19 09:49:44.028339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.299 [2024-11-19 09:49:44.028411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.299 [2024-11-19 09:49:44.028448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.299 [2024-11-19 09:49:44.028480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.299 [2024-11-19 09:49:44.028550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.299 qpair failed and we were unable to recover it.
00:31:57.299 [2024-11-19 09:49:44.038196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.299 [2024-11-19 09:49:44.038317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.299 [2024-11-19 09:49:44.038370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.299 [2024-11-19 09:49:44.038396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.299 [2024-11-19 09:49:44.038419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.299 [2024-11-19 09:49:44.038465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.299 qpair failed and we were unable to recover it.
00:31:57.561 [2024-11-19 09:49:44.048116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.561 [2024-11-19 09:49:44.048207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.561 [2024-11-19 09:49:44.048244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.561 [2024-11-19 09:49:44.048267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.561 [2024-11-19 09:49:44.048289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.561 [2024-11-19 09:49:44.048334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.561 qpair failed and we were unable to recover it.
00:31:57.561 [2024-11-19 09:49:44.058243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.561 [2024-11-19 09:49:44.058345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.561 [2024-11-19 09:49:44.058366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.561 [2024-11-19 09:49:44.058381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.561 [2024-11-19 09:49:44.058397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.561 [2024-11-19 09:49:44.058429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.561 qpair failed and we were unable to recover it.
00:31:57.561 [2024-11-19 09:49:44.068322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.561 [2024-11-19 09:49:44.068380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.561 [2024-11-19 09:49:44.068399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.561 [2024-11-19 09:49:44.068412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.561 [2024-11-19 09:49:44.068423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.561 [2024-11-19 09:49:44.068448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.561 qpair failed and we were unable to recover it.
00:31:57.561 [2024-11-19 09:49:44.078309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.561 [2024-11-19 09:49:44.078362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.561 [2024-11-19 09:49:44.078384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.561 [2024-11-19 09:49:44.078396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.561 [2024-11-19 09:49:44.078407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.561 [2024-11-19 09:49:44.078431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.561 qpair failed and we were unable to recover it.
00:31:57.561 [2024-11-19 09:49:44.088235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.561 [2024-11-19 09:49:44.088286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.561 [2024-11-19 09:49:44.088306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.561 [2024-11-19 09:49:44.088318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.561 [2024-11-19 09:49:44.088330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.561 [2024-11-19 09:49:44.088354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.561 qpair failed and we were unable to recover it.
00:31:57.561 [2024-11-19 09:49:44.098373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.561 [2024-11-19 09:49:44.098428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.561 [2024-11-19 09:49:44.098452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.561 [2024-11-19 09:49:44.098464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.561 [2024-11-19 09:49:44.098475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.561 [2024-11-19 09:49:44.098498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.561 qpair failed and we were unable to recover it.
00:31:57.561 [2024-11-19 09:49:44.108438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.561 [2024-11-19 09:49:44.108493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.561 [2024-11-19 09:49:44.108513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.108525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.108536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.108559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.118424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.118487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.118507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.118519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.118530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.118552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.128477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.128535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.128554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.128566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.128577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.128600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.138479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.138531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.138552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.138564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.138579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.138602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.148559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.148646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.148662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.148674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.148685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.148708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.158468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.158518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.158538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.158550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.158561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.158584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.168547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.168642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.168657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.168668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.168679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.168702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.178583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.178646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.178665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.178677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.178688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.178711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.188646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.188706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.188725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.188737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.188748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.188771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.198633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.198686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.198708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.198720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.198731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.198753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.208665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:57.562 [2024-11-19 09:49:44.208718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:57.562 [2024-11-19 09:49:44.208738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:57.562 [2024-11-19 09:49:44.208750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:57.562 [2024-11-19 09:49:44.208761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:57.562 [2024-11-19 09:49:44.208784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:57.562 qpair failed and we were unable to recover it.
00:31:57.562 [2024-11-19 09:49:44.218732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.563 [2024-11-19 09:49:44.218795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.563 [2024-11-19 09:49:44.218814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.563 [2024-11-19 09:49:44.218825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.563 [2024-11-19 09:49:44.218836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.563 [2024-11-19 09:49:44.218859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.563 qpair failed and we were unable to recover it. 
00:31:57.563 [2024-11-19 09:49:44.228710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.563 [2024-11-19 09:49:44.228766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.563 [2024-11-19 09:49:44.228789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.563 [2024-11-19 09:49:44.228801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.563 [2024-11-19 09:49:44.228812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.563 [2024-11-19 09:49:44.228835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.563 qpair failed and we were unable to recover it. 
00:31:57.563 [2024-11-19 09:49:44.238740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.563 [2024-11-19 09:49:44.238791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.563 [2024-11-19 09:49:44.238812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.563 [2024-11-19 09:49:44.238824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.563 [2024-11-19 09:49:44.238835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.563 [2024-11-19 09:49:44.238857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.563 qpair failed and we were unable to recover it. 
00:31:57.563 [2024-11-19 09:49:44.248755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.563 [2024-11-19 09:49:44.248811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.563 [2024-11-19 09:49:44.248831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.563 [2024-11-19 09:49:44.248843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.563 [2024-11-19 09:49:44.248854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.563 [2024-11-19 09:49:44.248877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.563 qpair failed and we were unable to recover it. 
00:31:57.563 [2024-11-19 09:49:44.258763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.563 [2024-11-19 09:49:44.258813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.563 [2024-11-19 09:49:44.258833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.563 [2024-11-19 09:49:44.258845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.563 [2024-11-19 09:49:44.258855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.563 [2024-11-19 09:49:44.258879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.563 qpair failed and we were unable to recover it. 
00:31:57.563 [2024-11-19 09:49:44.268846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.563 [2024-11-19 09:49:44.268903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.563 [2024-11-19 09:49:44.268931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.563 [2024-11-19 09:49:44.268950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.563 [2024-11-19 09:49:44.268961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.563 [2024-11-19 09:49:44.268990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.563 qpair failed and we were unable to recover it. 
00:31:57.563 [2024-11-19 09:49:44.278849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.563 [2024-11-19 09:49:44.278918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.563 [2024-11-19 09:49:44.278941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.563 [2024-11-19 09:49:44.278953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.563 [2024-11-19 09:49:44.278964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.563 [2024-11-19 09:49:44.278989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.563 qpair failed and we were unable to recover it. 
00:31:57.563 [2024-11-19 09:49:44.288850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.563 [2024-11-19 09:49:44.288906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.563 [2024-11-19 09:49:44.288928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.563 [2024-11-19 09:49:44.288941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.563 [2024-11-19 09:49:44.288952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.563 [2024-11-19 09:49:44.288976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.563 qpair failed and we were unable to recover it. 
00:31:57.563 [2024-11-19 09:49:44.298909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.563 [2024-11-19 09:49:44.298996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.563 [2024-11-19 09:49:44.299012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.563 [2024-11-19 09:49:44.299023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.563 [2024-11-19 09:49:44.299034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.563 [2024-11-19 09:49:44.299058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.563 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.308970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.309026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.309048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.309061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.309072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.309100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.318993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.319078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.319094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.319105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.319116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.319140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.329008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.329064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.329084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.329097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.329108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.329131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.338882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.338930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.338950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.338963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.338974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.338998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.349067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.349120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.349140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.349152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.349170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.349193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.359056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.359112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.359134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.359147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.359163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.359187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.368967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.369023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.369043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.369055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.369066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.369089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.379128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.379185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.379206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.379218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.379229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.379252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.389187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.389247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.389268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.389281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.389292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.389316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.399186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.399277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.399293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.826 [2024-11-19 09:49:44.399309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.826 [2024-11-19 09:49:44.399320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.826 [2024-11-19 09:49:44.399344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.826 qpair failed and we were unable to recover it. 
00:31:57.826 [2024-11-19 09:49:44.409219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.826 [2024-11-19 09:49:44.409274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.826 [2024-11-19 09:49:44.409295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.409308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.409319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.409342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.419267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.419328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.419347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.419359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.419371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.419394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.429287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.429340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.429360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.429372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.429384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.429406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.439257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.439314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.439335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.439347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.439358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.439390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.449332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.449405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.449422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.449434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.449445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.449470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.459343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.459390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.459412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.459423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.459434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.459457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.469410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.469472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.469490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.469501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.469512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.469536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.479269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.479324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.479344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.479356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.479367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.479390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.489393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.489471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.489487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.489498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.489510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.489532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.499448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.499495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.499516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.499527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.499538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.499561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.509497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.509549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.509568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.509580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.509590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.509613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.519493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.519544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.519563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.519575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.519586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.519609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.529529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.529587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.529610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.827 [2024-11-19 09:49:44.529620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.827 [2024-11-19 09:49:44.529631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.827 [2024-11-19 09:49:44.529654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.827 qpair failed and we were unable to recover it. 
00:31:57.827 [2024-11-19 09:49:44.539539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.827 [2024-11-19 09:49:44.539598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.827 [2024-11-19 09:49:44.539618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.828 [2024-11-19 09:49:44.539629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.828 [2024-11-19 09:49:44.539640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.828 [2024-11-19 09:49:44.539663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.828 qpair failed and we were unable to recover it. 
00:31:57.828 [2024-11-19 09:49:44.549619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.828 [2024-11-19 09:49:44.549677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.828 [2024-11-19 09:49:44.549696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.828 [2024-11-19 09:49:44.549708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.828 [2024-11-19 09:49:44.549719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.828 [2024-11-19 09:49:44.549742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.828 qpair failed and we were unable to recover it. 
00:31:57.828 [2024-11-19 09:49:44.559590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.828 [2024-11-19 09:49:44.559650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.828 [2024-11-19 09:49:44.559670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.828 [2024-11-19 09:49:44.559681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.828 [2024-11-19 09:49:44.559692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:57.828 [2024-11-19 09:49:44.559715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:57.828 qpair failed and we were unable to recover it. 
00:31:57.828 [2024-11-19 09:49:44.569631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:57.828 [2024-11-19 09:49:44.569688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:57.828 [2024-11-19 09:49:44.569709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:57.828 [2024-11-19 09:49:44.569720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:57.828 [2024-11-19 09:49:44.569735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.089 [2024-11-19 09:49:44.569759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.089 qpair failed and we were unable to recover it. 
00:31:58.089 [2024-11-19 09:49:44.579648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.579708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.579727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.579739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.579749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.579773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.589725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.589776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.589797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.589809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.589820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.589843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.599713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.599762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.599782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.599794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.599805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.599828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.609730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.609784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.609804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.609816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.609828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.609850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.619772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.619830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.619858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.619871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.619883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.619910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.629820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.629883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.629911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.629924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.629936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.629964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.639788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.639894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.639918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.639931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.639942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.639971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.649847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.649903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.649927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.649940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.649951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.649975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.659862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.659908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.659934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.659946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.659957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.659981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.669942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.670003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.670023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.670034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.670045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.670069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.679799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.679861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.679880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.679891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.679902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.679925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.689944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.689993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.690012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.690024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.690035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.090 [2024-11-19 09:49:44.690058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.090 qpair failed and we were unable to recover it. 
00:31:58.090 [2024-11-19 09:49:44.699993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.090 [2024-11-19 09:49:44.700051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.090 [2024-11-19 09:49:44.700070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.090 [2024-11-19 09:49:44.700082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.090 [2024-11-19 09:49:44.700097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.700121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.710045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.710103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.710123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.710135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.710146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.710174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.719915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.719970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.719991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.720003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.720014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.720037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.730061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.730124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.730142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.730153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.730169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.730193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.740088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.740141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.740166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.740178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.740190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.740213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.750193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.750302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.750318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.750329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.750340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.750362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.760027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.760078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.760098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.760109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.760120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.760150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.770186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.770238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.770261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.770273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.770284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.770308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.780187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.780247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.780266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.780277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.780288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.780312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.790298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.790358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.790378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.790389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.790400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.790423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.800248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.800314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.800334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.800345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.800356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.800379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.810277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.810337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.810356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.810368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.810379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.810402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.820292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.820342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.820362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.820375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.820386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.091 [2024-11-19 09:49:44.820409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.091 qpair failed and we were unable to recover it. 
00:31:58.091 [2024-11-19 09:49:44.830367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.091 [2024-11-19 09:49:44.830423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.091 [2024-11-19 09:49:44.830443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.091 [2024-11-19 09:49:44.830459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.091 [2024-11-19 09:49:44.830471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.092 [2024-11-19 09:49:44.830494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.092 qpair failed and we were unable to recover it. 
00:31:58.353 [2024-11-19 09:49:44.840370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.353 [2024-11-19 09:49:44.840424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.353 [2024-11-19 09:49:44.840444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.353 [2024-11-19 09:49:44.840456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.353 [2024-11-19 09:49:44.840468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.353 [2024-11-19 09:49:44.840492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.353 qpair failed and we were unable to recover it. 
00:31:58.353 [2024-11-19 09:49:44.850390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.353 [2024-11-19 09:49:44.850446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.353 [2024-11-19 09:49:44.850466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.353 [2024-11-19 09:49:44.850477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.353 [2024-11-19 09:49:44.850488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.353 [2024-11-19 09:49:44.850511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.353 qpair failed and we were unable to recover it. 
00:31:58.353 [2024-11-19 09:49:44.860286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.353 [2024-11-19 09:49:44.860333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.353 [2024-11-19 09:49:44.860356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.353 [2024-11-19 09:49:44.860368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.353 [2024-11-19 09:49:44.860379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.353 [2024-11-19 09:49:44.860411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.353 qpair failed and we were unable to recover it. 
00:31:58.353 [2024-11-19 09:49:44.870531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.353 [2024-11-19 09:49:44.870608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.353 [2024-11-19 09:49:44.870624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.353 [2024-11-19 09:49:44.870635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.353 [2024-11-19 09:49:44.870646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.353 [2024-11-19 09:49:44.870673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.353 qpair failed and we were unable to recover it. 
00:31:58.353 [2024-11-19 09:49:44.880470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.353 [2024-11-19 09:49:44.880523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.353 [2024-11-19 09:49:44.880544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.353 [2024-11-19 09:49:44.880556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.353 [2024-11-19 09:49:44.880567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.353 [2024-11-19 09:49:44.880591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.353 qpair failed and we were unable to recover it. 
00:31:58.353 [2024-11-19 09:49:44.890469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.353 [2024-11-19 09:49:44.890553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.353 [2024-11-19 09:49:44.890569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.353 [2024-11-19 09:49:44.890580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.353 [2024-11-19 09:49:44.890591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.353 [2024-11-19 09:49:44.890614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.353 qpair failed and we were unable to recover it. 
00:31:58.353 [2024-11-19 09:49:44.900482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.353 [2024-11-19 09:49:44.900569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.353 [2024-11-19 09:49:44.900585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.353 [2024-11-19 09:49:44.900596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.353 [2024-11-19 09:49:44.900608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.353 [2024-11-19 09:49:44.900631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.353 qpair failed and we were unable to recover it. 
00:31:58.353 [2024-11-19 09:49:44.910490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.353 [2024-11-19 09:49:44.910550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.353 [2024-11-19 09:49:44.910568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:44.910579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:44.910591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:44.910615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:44.920597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:44.920651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:44.920674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:44.920686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:44.920697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:44.920720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:44.930624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:44.930673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:44.930696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:44.930708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:44.930719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:44.930742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:44.940629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:44.940681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:44.940702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:44.940714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:44.940724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:44.940747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:44.950689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:44.950739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:44.950760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:44.950772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:44.950783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:44.950806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:44.960684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:44.960744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:44.960763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:44.960778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:44.960789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:44.960813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:44.970728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:44.970787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:44.970806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:44.970818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:44.970829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:44.970852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:44.980719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:44.980781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:44.980800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:44.980811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:44.980822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:44.980845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:44.990790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:44.990846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:44.990866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:44.990878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:44.990889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:44.990912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:45.000787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:45.000840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:45.000862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:45.000874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:45.000885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:45.000913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:45.010814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:45.010874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:45.010902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:45.010916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:45.010926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:45.010954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:45.020841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:45.020909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:45.020938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:45.020952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:45.020964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:45.020991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:45.030821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:45.030875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:45.030898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.354 [2024-11-19 09:49:45.030911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.354 [2024-11-19 09:49:45.030921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.354 [2024-11-19 09:49:45.030946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.354 qpair failed and we were unable to recover it. 
00:31:58.354 [2024-11-19 09:49:45.040764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.354 [2024-11-19 09:49:45.040813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.354 [2024-11-19 09:49:45.040834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.355 [2024-11-19 09:49:45.040847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.355 [2024-11-19 09:49:45.040858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.355 [2024-11-19 09:49:45.040882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.355 qpair failed and we were unable to recover it. 
00:31:58.355 [2024-11-19 09:49:45.050925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.355 [2024-11-19 09:49:45.050976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.355 [2024-11-19 09:49:45.050997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.355 [2024-11-19 09:49:45.051009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.355 [2024-11-19 09:49:45.051020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.355 [2024-11-19 09:49:45.051043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.355 qpair failed and we were unable to recover it. 
00:31:58.355 [2024-11-19 09:49:45.060940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.355 [2024-11-19 09:49:45.060991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.355 [2024-11-19 09:49:45.061012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.355 [2024-11-19 09:49:45.061024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.355 [2024-11-19 09:49:45.061035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.355 [2024-11-19 09:49:45.061058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.355 qpair failed and we were unable to recover it. 
00:31:58.355 [2024-11-19 09:49:45.071008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.355 [2024-11-19 09:49:45.071061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.355 [2024-11-19 09:49:45.071080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.355 [2024-11-19 09:49:45.071093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.355 [2024-11-19 09:49:45.071104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.355 [2024-11-19 09:49:45.071127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.355 qpair failed and we were unable to recover it. 
00:31:58.355 [2024-11-19 09:49:45.080957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.355 [2024-11-19 09:49:45.081006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.355 [2024-11-19 09:49:45.081028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.355 [2024-11-19 09:49:45.081040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.355 [2024-11-19 09:49:45.081052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.355 [2024-11-19 09:49:45.081075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.355 qpair failed and we were unable to recover it. 
00:31:58.355 [2024-11-19 09:49:45.091027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.355 [2024-11-19 09:49:45.091082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.355 [2024-11-19 09:49:45.091107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.355 [2024-11-19 09:49:45.091119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.355 [2024-11-19 09:49:45.091130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.355 [2024-11-19 09:49:45.091153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.355 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.100943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.101010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.101029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.101040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.101051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.101075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.111110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.111171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.111194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.111205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.111216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.111241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.121050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.121103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.121123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.121135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.121146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.121174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.131012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.131065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.131087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.131099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.131115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.131138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.141141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.141192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.141212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.141224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.141235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.141258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.151056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.151110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.151131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.151143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.151154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.151188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.161199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.161255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.161275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.161287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.161298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.161322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.171251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.171305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.171326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.171338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.171349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.171372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.181259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.181306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.181327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.181339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.181349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.181372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.191265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.191367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.191383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.191394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.191405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.191429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.201229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.201293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.201311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.201323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.201335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.201358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.211346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.211445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.211461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.211472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.211484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.616 [2024-11-19 09:49:45.211507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.616 qpair failed and we were unable to recover it. 
00:31:58.616 [2024-11-19 09:49:45.221349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.616 [2024-11-19 09:49:45.221399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.616 [2024-11-19 09:49:45.221423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.616 [2024-11-19 09:49:45.221435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.616 [2024-11-19 09:49:45.221446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.221469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.231397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.231446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.231467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.231479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.231490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.231514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.241401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.241452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.241474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.241486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.241497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.241520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.251460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.251557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.251573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.251584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.251595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.251618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.261457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.261510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.261529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.261540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.261556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.261580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.271487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.271542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.271562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.271574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.271585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.271607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.281512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.281573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.281592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.281604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.281615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.281637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.291541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.291590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.291612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.291624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.291635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.291659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.301550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.301607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.301626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.301638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.301649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.301672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.311587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.311638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.311659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.311671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.311682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.311705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.321635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.321693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.321712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.321724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.321735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.321758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.331611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.331665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.331687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.331699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.331710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.331732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.341663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.341713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.341734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.341746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.341757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.341780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.617 qpair failed and we were unable to recover it. 
00:31:58.617 [2024-11-19 09:49:45.351691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.617 [2024-11-19 09:49:45.351744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.617 [2024-11-19 09:49:45.351765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.617 [2024-11-19 09:49:45.351777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.617 [2024-11-19 09:49:45.351788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.617 [2024-11-19 09:49:45.351811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.618 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.361735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.361786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.361806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.361818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.361829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.361851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.371772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.371826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.371845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.371857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.371868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.371892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.381824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.381922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.381946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.381960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.381971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.382000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.391828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.391883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.391905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.391923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.391935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.391960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.401838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.401892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.401915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.401927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.401938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.401961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.411909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.411997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.412013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.412024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.412034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.412057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.421888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.421940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.421962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.421974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.421987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.422011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.431919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.431978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.431997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.432008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.432019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.432048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.441869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.441923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.441946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.441958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.441969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.441993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.451986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.452042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.452063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.452075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.452086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.452109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.461870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.461926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.461946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.461958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.461970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.880 [2024-11-19 09:49:45.461993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.880 qpair failed and we were unable to recover it. 
00:31:58.880 [2024-11-19 09:49:45.472031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.880 [2024-11-19 09:49:45.472082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.880 [2024-11-19 09:49:45.472104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.880 [2024-11-19 09:49:45.472117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.880 [2024-11-19 09:49:45.472128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.472151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.482042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.482098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.482118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.482129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.482140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.482169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.492090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.492151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.492177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.492189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.492200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.492223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.502114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.502167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.502187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.502197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.502208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.502232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.512133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.512183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.512203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.512215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.512225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.512249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.522164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.522217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.522240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.522251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.522262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.522285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.532242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.532316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.532332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.532343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.532354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.532377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.542211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.542297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.542313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.542325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.542336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.542360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.552250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.552330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.552345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.552357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.552368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.552391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.562281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.562336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.562357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.562368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.562380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.562407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.572281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.572385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.572400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.572412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.572422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.572445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.582297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.582349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.582370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.582382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.582393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.582415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.592348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.592396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.592417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.592429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.592441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.881 [2024-11-19 09:49:45.592464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.881 qpair failed and we were unable to recover it. 
00:31:58.881 [2024-11-19 09:49:45.602276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.881 [2024-11-19 09:49:45.602325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.881 [2024-11-19 09:49:45.602346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.881 [2024-11-19 09:49:45.602357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.881 [2024-11-19 09:49:45.602368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.882 [2024-11-19 09:49:45.602391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.882 qpair failed and we were unable to recover it. 
00:31:58.882 [2024-11-19 09:49:45.612392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.882 [2024-11-19 09:49:45.612456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.882 [2024-11-19 09:49:45.612474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.882 [2024-11-19 09:49:45.612485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.882 [2024-11-19 09:49:45.612496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.882 [2024-11-19 09:49:45.612519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.882 qpair failed and we were unable to recover it. 
00:31:58.882 [2024-11-19 09:49:45.622440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:58.882 [2024-11-19 09:49:45.622486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:58.882 [2024-11-19 09:49:45.622505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:58.882 [2024-11-19 09:49:45.622518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:58.882 [2024-11-19 09:49:45.622529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:58.882 [2024-11-19 09:49:45.622552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:58.882 qpair failed and we were unable to recover it. 
00:31:59.144 [2024-11-19 09:49:45.632463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.144 [2024-11-19 09:49:45.632514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.144 [2024-11-19 09:49:45.632533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.144 [2024-11-19 09:49:45.632544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.144 [2024-11-19 09:49:45.632556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.144 [2024-11-19 09:49:45.632578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.144 qpair failed and we were unable to recover it. 
00:31:59.144 [2024-11-19 09:49:45.642449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.144 [2024-11-19 09:49:45.642502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.144 [2024-11-19 09:49:45.642523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.144 [2024-11-19 09:49:45.642535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.144 [2024-11-19 09:49:45.642546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.144 [2024-11-19 09:49:45.642569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.144 qpair failed and we were unable to recover it. 
00:31:59.144 [2024-11-19 09:49:45.652503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.144 [2024-11-19 09:49:45.652556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.144 [2024-11-19 09:49:45.652580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.144 [2024-11-19 09:49:45.652592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.144 [2024-11-19 09:49:45.652603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.144 [2024-11-19 09:49:45.652626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.144 qpair failed and we were unable to recover it. 
00:31:59.144 [2024-11-19 09:49:45.662537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.144 [2024-11-19 09:49:45.662630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.144 [2024-11-19 09:49:45.662645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.144 [2024-11-19 09:49:45.662657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.144 [2024-11-19 09:49:45.662668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.144 [2024-11-19 09:49:45.662691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.144 qpair failed and we were unable to recover it. 
00:31:59.144 [2024-11-19 09:49:45.672546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.144 [2024-11-19 09:49:45.672602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.144 [2024-11-19 09:49:45.672622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.144 [2024-11-19 09:49:45.672634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.144 [2024-11-19 09:49:45.672645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.144 [2024-11-19 09:49:45.672669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.144 qpair failed and we were unable to recover it. 
00:31:59.144 [2024-11-19 09:49:45.682543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.144 [2024-11-19 09:49:45.682598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.144 [2024-11-19 09:49:45.682618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.144 [2024-11-19 09:49:45.682630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.144 [2024-11-19 09:49:45.682641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.144 [2024-11-19 09:49:45.682664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.144 qpair failed and we were unable to recover it. 
00:31:59.144 [2024-11-19 09:49:45.692627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.144 [2024-11-19 09:49:45.692681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.144 [2024-11-19 09:49:45.692703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.144 [2024-11-19 09:49:45.692716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.144 [2024-11-19 09:49:45.692731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.144 [2024-11-19 09:49:45.692754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.144 qpair failed and we were unable to recover it. 
00:31:59.144 [2024-11-19 09:49:45.702610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.144 [2024-11-19 09:49:45.702660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.144 [2024-11-19 09:49:45.702678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.144 [2024-11-19 09:49:45.702690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.144 [2024-11-19 09:49:45.702701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.144 [2024-11-19 09:49:45.702724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.144 qpair failed and we were unable to recover it. 
00:31:59.144 [2024-11-19 09:49:45.712664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.144 [2024-11-19 09:49:45.712723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.144 [2024-11-19 09:49:45.712742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.145 [2024-11-19 09:49:45.712754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.145 [2024-11-19 09:49:45.712765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.145 [2024-11-19 09:49:45.712788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.145 qpair failed and we were unable to recover it. 
00:31:59.145 [2024-11-19 09:49:45.722654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.145 [2024-11-19 09:49:45.722709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.145 [2024-11-19 09:49:45.722729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.145 [2024-11-19 09:49:45.722741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.145 [2024-11-19 09:49:45.722752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.145 [2024-11-19 09:49:45.722775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.145 qpair failed and we were unable to recover it. 
00:31:59.145 [2024-11-19 09:49:45.732718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.732776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.732797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.732809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.732820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.732844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.742786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.742834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.742856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.742868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.742880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.742903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.752775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.752833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.752861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.752874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.752886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.752916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.762787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.762837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.762859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.762871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.762881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.762906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.772732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.772825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.772841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.772853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.772864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.772888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.782839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.782906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.782942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.782956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.782968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.782997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.792927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.793007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.793025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.793037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.793048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.793072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.802862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.802922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.802943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.802954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.802965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.802988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.812950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.813007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.813029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.813041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.813052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.813076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.822954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.823018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.823035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.823055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.823066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.823091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.832958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.833009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.833032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.145 [2024-11-19 09:49:45.833044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.145 [2024-11-19 09:49:45.833056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.145 [2024-11-19 09:49:45.833079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.145 qpair failed and we were unable to recover it.
00:31:59.145 [2024-11-19 09:49:45.843027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.145 [2024-11-19 09:49:45.843081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.145 [2024-11-19 09:49:45.843101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.146 [2024-11-19 09:49:45.843113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.146 [2024-11-19 09:49:45.843125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.146 [2024-11-19 09:49:45.843148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.146 qpair failed and we were unable to recover it.
00:31:59.146 [2024-11-19 09:49:45.853034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.146 [2024-11-19 09:49:45.853089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.146 [2024-11-19 09:49:45.853110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.146 [2024-11-19 09:49:45.853122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.146 [2024-11-19 09:49:45.853133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.146 [2024-11-19 09:49:45.853156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.146 qpair failed and we were unable to recover it.
00:31:59.146 [2024-11-19 09:49:45.863080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.146 [2024-11-19 09:49:45.863131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.146 [2024-11-19 09:49:45.863153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.146 [2024-11-19 09:49:45.863169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.146 [2024-11-19 09:49:45.863181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.146 [2024-11-19 09:49:45.863205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.146 qpair failed and we were unable to recover it.
00:31:59.146 [2024-11-19 09:49:45.872955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.146 [2024-11-19 09:49:45.873003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.146 [2024-11-19 09:49:45.873023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.146 [2024-11-19 09:49:45.873035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.146 [2024-11-19 09:49:45.873048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.146 [2024-11-19 09:49:45.873072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.146 qpair failed and we were unable to recover it.
00:31:59.146 [2024-11-19 09:49:45.883032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.146 [2024-11-19 09:49:45.883084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.146 [2024-11-19 09:49:45.883108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.146 [2024-11-19 09:49:45.883120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.146 [2024-11-19 09:49:45.883131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.146 [2024-11-19 09:49:45.883155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.146 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.893141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.893201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.893224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.893237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.893248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.893272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.903137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.903191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.903210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.903222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.903234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.903257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.913202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.913261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.913281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.913294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.913305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.913329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.923208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.923273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.923290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.923301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.923312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.923335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.933257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.933308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.933329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.933341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.933353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.933376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.943242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.943296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.943317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.943329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.943340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.943363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.953321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.953369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.953388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.953406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.953417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.953440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.963327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.963384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.963404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.963416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.963427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.963450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.973233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.973283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.973304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.973316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.973327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.973349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.983389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.983443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.983464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.983476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.983487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.983510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:45.993421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.408 [2024-11-19 09:49:45.993476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.408 [2024-11-19 09:49:45.993496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.408 [2024-11-19 09:49:45.993508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.408 [2024-11-19 09:49:45.993519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.408 [2024-11-19 09:49:45.993547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.408 qpair failed and we were unable to recover it.
00:31:59.408 [2024-11-19 09:49:46.003483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.409 [2024-11-19 09:49:46.003537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.409 [2024-11-19 09:49:46.003558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.409 [2024-11-19 09:49:46.003571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.409 [2024-11-19 09:49:46.003582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.409 [2024-11-19 09:49:46.003605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.409 qpair failed and we were unable to recover it.
00:31:59.409 [2024-11-19 09:49:46.013494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.409 [2024-11-19 09:49:46.013544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.409 [2024-11-19 09:49:46.013563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.409 [2024-11-19 09:49:46.013574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.409 [2024-11-19 09:49:46.013585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.409 [2024-11-19 09:49:46.013608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.409 qpair failed and we were unable to recover it.
00:31:59.409 [2024-11-19 09:49:46.023483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.409 [2024-11-19 09:49:46.023578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.409 [2024-11-19 09:49:46.023595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.409 [2024-11-19 09:49:46.023606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.409 [2024-11-19 09:49:46.023617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.409 [2024-11-19 09:49:46.023641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.409 qpair failed and we were unable to recover it.
00:31:59.409 [2024-11-19 09:49:46.033510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.409 [2024-11-19 09:49:46.033562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.409 [2024-11-19 09:49:46.033582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.409 [2024-11-19 09:49:46.033594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.409 [2024-11-19 09:49:46.033606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.409 [2024-11-19 09:49:46.033629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.409 qpair failed and we were unable to recover it.
00:31:59.409 [2024-11-19 09:49:46.043524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.409 [2024-11-19 09:49:46.043612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.409 [2024-11-19 09:49:46.043627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.409 [2024-11-19 09:49:46.043638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.409 [2024-11-19 09:49:46.043649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.409 [2024-11-19 09:49:46.043673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.409 qpair failed and we were unable to recover it.
00:31:59.409 [2024-11-19 09:49:46.053566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.409 [2024-11-19 09:49:46.053620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.409 [2024-11-19 09:49:46.053643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.409 [2024-11-19 09:49:46.053655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.409 [2024-11-19 09:49:46.053666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.409 [2024-11-19 09:49:46.053689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.409 qpair failed and we were unable to recover it.
00:31:59.409 [2024-11-19 09:49:46.063616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.409 [2024-11-19 09:49:46.063669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.409 [2024-11-19 09:49:46.063690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.409 [2024-11-19 09:49:46.063703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.409 [2024-11-19 09:49:46.063714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.409 [2024-11-19 09:49:46.063737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.409 qpair failed and we were unable to recover it.
00:31:59.409 [2024-11-19 09:49:46.073638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:59.409 [2024-11-19 09:49:46.073692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:59.409 [2024-11-19 09:49:46.073711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:59.409 [2024-11-19 09:49:46.073724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:59.409 [2024-11-19 09:49:46.073735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90
00:31:59.409 [2024-11-19 09:49:46.073759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:59.409 qpair failed and we were unable to recover it.
00:31:59.409 [2024-11-19 09:49:46.083627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.409 [2024-11-19 09:49:46.083679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.409 [2024-11-19 09:49:46.083707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.409 [2024-11-19 09:49:46.083719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.409 [2024-11-19 09:49:46.083730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.409 [2024-11-19 09:49:46.083753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.409 qpair failed and we were unable to recover it. 
00:31:59.410 [2024-11-19 09:49:46.093689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.410 [2024-11-19 09:49:46.093785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.410 [2024-11-19 09:49:46.093801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.410 [2024-11-19 09:49:46.093813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.410 [2024-11-19 09:49:46.093824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.410 [2024-11-19 09:49:46.093848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.410 qpair failed and we were unable to recover it. 
00:31:59.410 [2024-11-19 09:49:46.103707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.410 [2024-11-19 09:49:46.103764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.410 [2024-11-19 09:49:46.103785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.410 [2024-11-19 09:49:46.103797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.410 [2024-11-19 09:49:46.103808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.410 [2024-11-19 09:49:46.103831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.410 qpair failed and we were unable to recover it. 
00:31:59.410 [2024-11-19 09:49:46.113746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.410 [2024-11-19 09:49:46.113803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.410 [2024-11-19 09:49:46.113831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.410 [2024-11-19 09:49:46.113845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.410 [2024-11-19 09:49:46.113856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.410 [2024-11-19 09:49:46.113884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.410 qpair failed and we were unable to recover it. 
00:31:59.410 [2024-11-19 09:49:46.123747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.410 [2024-11-19 09:49:46.123829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.410 [2024-11-19 09:49:46.123847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.410 [2024-11-19 09:49:46.123858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.410 [2024-11-19 09:49:46.123870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.410 [2024-11-19 09:49:46.123900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.410 qpair failed and we were unable to recover it. 
00:31:59.410 [2024-11-19 09:49:46.133800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.410 [2024-11-19 09:49:46.133852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.410 [2024-11-19 09:49:46.133874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.410 [2024-11-19 09:49:46.133888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.410 [2024-11-19 09:49:46.133899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.410 [2024-11-19 09:49:46.133923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.410 qpair failed and we were unable to recover it. 
00:31:59.410 [2024-11-19 09:49:46.143812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.410 [2024-11-19 09:49:46.143861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.410 [2024-11-19 09:49:46.143882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.410 [2024-11-19 09:49:46.143894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.410 [2024-11-19 09:49:46.143905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.410 [2024-11-19 09:49:46.143928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.410 qpair failed and we were unable to recover it. 
00:31:59.671 [2024-11-19 09:49:46.153861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.671 [2024-11-19 09:49:46.153912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.671 [2024-11-19 09:49:46.153933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.671 [2024-11-19 09:49:46.153945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.671 [2024-11-19 09:49:46.153956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.671 [2024-11-19 09:49:46.153979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.671 qpair failed and we were unable to recover it. 
00:31:59.671 [2024-11-19 09:49:46.163869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.671 [2024-11-19 09:49:46.163922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.671 [2024-11-19 09:49:46.163944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.671 [2024-11-19 09:49:46.163956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.671 [2024-11-19 09:49:46.163967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.671 [2024-11-19 09:49:46.163990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.671 qpair failed and we were unable to recover it. 
00:31:59.671 [2024-11-19 09:49:46.173900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.671 [2024-11-19 09:49:46.173955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.671 [2024-11-19 09:49:46.173974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.671 [2024-11-19 09:49:46.173986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.671 [2024-11-19 09:49:46.173997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.671 [2024-11-19 09:49:46.174020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.671 qpair failed and we were unable to recover it. 
00:31:59.671 [2024-11-19 09:49:46.183927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.671 [2024-11-19 09:49:46.183974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.671 [2024-11-19 09:49:46.183993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.671 [2024-11-19 09:49:46.184005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.671 [2024-11-19 09:49:46.184016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.671 [2024-11-19 09:49:46.184039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.671 qpair failed and we were unable to recover it. 
00:31:59.671 [2024-11-19 09:49:46.193952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.671 [2024-11-19 09:49:46.194006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.671 [2024-11-19 09:49:46.194027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.671 [2024-11-19 09:49:46.194039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.671 [2024-11-19 09:49:46.194050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.671 [2024-11-19 09:49:46.194073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.671 qpair failed and we were unable to recover it. 
00:31:59.671 [2024-11-19 09:49:46.203977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.671 [2024-11-19 09:49:46.204035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.671 [2024-11-19 09:49:46.204056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.671 [2024-11-19 09:49:46.204068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.672 [2024-11-19 09:49:46.204079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.672 [2024-11-19 09:49:46.204103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.672 qpair failed and we were unable to recover it. 
00:31:59.672 [2024-11-19 09:49:46.214027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.672 [2024-11-19 09:49:46.214079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.672 [2024-11-19 09:49:46.214104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.672 [2024-11-19 09:49:46.214116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.672 [2024-11-19 09:49:46.214127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.672 [2024-11-19 09:49:46.214152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.672 qpair failed and we were unable to recover it. 
00:31:59.672 [2024-11-19 09:49:46.224011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.672 [2024-11-19 09:49:46.224058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.672 [2024-11-19 09:49:46.224077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.672 [2024-11-19 09:49:46.224089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.672 [2024-11-19 09:49:46.224100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.672 [2024-11-19 09:49:46.224123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.672 qpair failed and we were unable to recover it. 
00:31:59.672 [2024-11-19 09:49:46.234053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.672 [2024-11-19 09:49:46.234105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.672 [2024-11-19 09:49:46.234126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.672 [2024-11-19 09:49:46.234139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.672 [2024-11-19 09:49:46.234150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.672 [2024-11-19 09:49:46.234178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.672 qpair failed and we were unable to recover it. 
00:31:59.672 [2024-11-19 09:49:46.244108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:59.672 [2024-11-19 09:49:46.244169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:59.672 [2024-11-19 09:49:46.244190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:59.672 [2024-11-19 09:49:46.244202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:59.672 [2024-11-19 09:49:46.244213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1414000b90 00:31:59.672 [2024-11-19 09:49:46.244237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:59.672 qpair failed and we were unable to recover it. 00:31:59.672 [2024-11-19 09:49:46.244432] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:31:59.672 A controller has encountered a failure and is being reset. 00:31:59.672 [2024-11-19 09:49:46.244549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb38e00 (9): Bad file descriptor 00:31:59.672 Controller properly reset. 
00:31:59.672 Initializing NVMe Controllers 00:31:59.672 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:59.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:59.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:59.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:59.672 Initialization complete. Launching workers. 00:31:59.672 Starting thread on core 1 00:31:59.672 Starting thread on core 2 00:31:59.672 Starting thread on core 3 00:31:59.672 Starting thread on core 0 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:59.672 00:31:59.672 real 0m11.364s 00:31:59.672 user 0m22.086s 00:31:59.672 sys 0m3.747s 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:59.672 ************************************ 00:31:59.672 END TEST nvmf_target_disconnect_tc2 00:31:59.672 ************************************ 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:59.672 09:49:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.672 rmmod nvme_tcp 00:31:59.672 rmmod nvme_fabrics 00:31:59.672 rmmod nvme_keyring 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 528835 ']' 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 528835 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 528835 ']' 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 528835 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:31:59.672 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528835 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528835' 00:31:59.933 killing process with pid 528835 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 528835 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 528835 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.933 09:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.476 09:49:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:02.476 00:32:02.476 real 0m21.712s 00:32:02.476 user 0m49.609s 00:32:02.476 sys 
0m9.908s 00:32:02.476 09:49:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.476 09:49:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:02.476 ************************************ 00:32:02.476 END TEST nvmf_target_disconnect 00:32:02.476 ************************************ 00:32:02.477 09:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:02.477 00:32:02.477 real 6m35.258s 00:32:02.477 user 11m33.872s 00:32:02.477 sys 2m14.708s 00:32:02.477 09:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.477 09:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.477 ************************************ 00:32:02.477 END TEST nvmf_host 00:32:02.477 ************************************ 00:32:02.477 09:49:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:32:02.477 09:49:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:32:02.477 09:49:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:02.477 09:49:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:02.477 09:49:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.477 09:49:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:02.477 ************************************ 00:32:02.477 START TEST nvmf_target_core_interrupt_mode 00:32:02.477 ************************************ 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:32:02.477 * Looking for test storage... 
00:32:02.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:02.477 09:49:48 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:02.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.477 --rc 
genhtml_branch_coverage=1 00:32:02.477 --rc genhtml_function_coverage=1 00:32:02.477 --rc genhtml_legend=1 00:32:02.477 --rc geninfo_all_blocks=1 00:32:02.477 --rc geninfo_unexecuted_blocks=1 00:32:02.477 00:32:02.477 ' 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:02.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.477 --rc genhtml_branch_coverage=1 00:32:02.477 --rc genhtml_function_coverage=1 00:32:02.477 --rc genhtml_legend=1 00:32:02.477 --rc geninfo_all_blocks=1 00:32:02.477 --rc geninfo_unexecuted_blocks=1 00:32:02.477 00:32:02.477 ' 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:02.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.477 --rc genhtml_branch_coverage=1 00:32:02.477 --rc genhtml_function_coverage=1 00:32:02.477 --rc genhtml_legend=1 00:32:02.477 --rc geninfo_all_blocks=1 00:32:02.477 --rc geninfo_unexecuted_blocks=1 00:32:02.477 00:32:02.477 ' 00:32:02.477 09:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:02.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.477 --rc genhtml_branch_coverage=1 00:32:02.477 --rc genhtml_function_coverage=1 00:32:02.477 --rc genhtml_legend=1 00:32:02.477 --rc geninfo_all_blocks=1 00:32:02.477 --rc geninfo_unexecuted_blocks=1 00:32:02.477 00:32:02.477 ' 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.477 
09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.477 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.478 09:49:49 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:02.478 
09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:02.478 ************************************ 00:32:02.478 START TEST nvmf_abort 00:32:02.478 ************************************ 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:02.478 * Looking for test storage... 
00:32:02.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:32:02.478 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:32:02.765 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:02.766 09:49:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:02.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.766 --rc genhtml_branch_coverage=1 00:32:02.766 --rc genhtml_function_coverage=1 00:32:02.766 --rc genhtml_legend=1 00:32:02.766 --rc geninfo_all_blocks=1 00:32:02.766 --rc geninfo_unexecuted_blocks=1 00:32:02.766 00:32:02.766 ' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:02.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.766 --rc genhtml_branch_coverage=1 00:32:02.766 --rc genhtml_function_coverage=1 00:32:02.766 --rc genhtml_legend=1 00:32:02.766 --rc geninfo_all_blocks=1 00:32:02.766 --rc geninfo_unexecuted_blocks=1 00:32:02.766 00:32:02.766 ' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:02.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.766 --rc genhtml_branch_coverage=1 00:32:02.766 --rc genhtml_function_coverage=1 00:32:02.766 --rc genhtml_legend=1 00:32:02.766 --rc geninfo_all_blocks=1 00:32:02.766 --rc geninfo_unexecuted_blocks=1 00:32:02.766 00:32:02.766 ' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:02.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.766 --rc genhtml_branch_coverage=1 00:32:02.766 --rc genhtml_function_coverage=1 00:32:02.766 --rc genhtml_legend=1 00:32:02.766 --rc geninfo_all_blocks=1 00:32:02.766 --rc geninfo_unexecuted_blocks=1 00:32:02.766 00:32:02.766 ' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.766 09:49:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.766 09:49:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:32:02.766 09:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:32:10.908 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:10.909 09:49:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:10.909 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:10.909 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.909 
09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:10.909 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:10.909 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:10.909 09:49:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.909 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:10.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:32:10.910 00:32:10.910 --- 10.0.0.2 ping statistics --- 00:32:10.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.910 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:10.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:32:10.910 00:32:10.910 --- 10.0.0.1 ping statistics --- 00:32:10.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.910 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=534417 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 534417 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 534417 ']' 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.910 09:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.910 [2024-11-19 09:49:56.850552] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:10.910 [2024-11-19 09:49:56.851697] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:32:10.910 [2024-11-19 09:49:56.851754] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.910 [2024-11-19 09:49:56.952803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:10.910 [2024-11-19 09:49:57.004323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.910 [2024-11-19 09:49:57.004375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.910 [2024-11-19 09:49:57.004384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.910 [2024-11-19 09:49:57.004391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.910 [2024-11-19 09:49:57.004398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.910 [2024-11-19 09:49:57.006495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.910 [2024-11-19 09:49:57.006655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.910 [2024-11-19 09:49:57.006655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:10.910 [2024-11-19 09:49:57.083582] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:10.910 [2024-11-19 09:49:57.084578] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:10.910 [2024-11-19 09:49:57.085125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:10.910 [2024-11-19 09:49:57.085273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.171 [2024-11-19 09:49:57.707586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:32:11.171 Malloc0 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.171 Delay0 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.171 [2024-11-19 09:49:57.803508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.171 09:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:11.433 [2024-11-19 09:49:57.986261] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:13.979 Initializing NVMe Controllers 00:32:13.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:13.979 controller IO queue size 128 less than required 00:32:13.979 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:13.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:13.979 Initialization complete. Launching workers. 
00:32:13.979 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28726 00:32:13.979 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28783, failed to submit 66 00:32:13.979 success 28726, unsuccessful 57, failed 0 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:13.979 rmmod nvme_tcp 00:32:13.979 rmmod nvme_fabrics 00:32:13.979 rmmod nvme_keyring 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:13.979 09:50:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 534417 ']' 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 534417 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 534417 ']' 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 534417 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 534417 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 534417' 00:32:13.979 killing process with pid 534417 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 534417 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 534417 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:13.979 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.980 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.980 09:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.896 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.896 00:32:15.896 real 0m13.490s 00:32:15.896 user 0m11.416s 00:32:15.896 sys 0m6.999s 00:32:15.896 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.896 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:15.896 ************************************ 00:32:15.896 END TEST nvmf_abort 00:32:15.896 ************************************ 00:32:15.896 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:15.896 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:15.896 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.896 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:16.158 ************************************ 00:32:16.158 START TEST nvmf_ns_hotplug_stress 00:32:16.158 ************************************ 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:16.158 * Looking for test storage... 00:32:16.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.158 --rc genhtml_branch_coverage=1 00:32:16.158 --rc genhtml_function_coverage=1 00:32:16.158 --rc genhtml_legend=1 00:32:16.158 --rc geninfo_all_blocks=1 00:32:16.158 --rc geninfo_unexecuted_blocks=1 00:32:16.158 00:32:16.158 ' 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.158 --rc genhtml_branch_coverage=1 00:32:16.158 --rc genhtml_function_coverage=1 00:32:16.158 --rc genhtml_legend=1 00:32:16.158 --rc geninfo_all_blocks=1 00:32:16.158 --rc geninfo_unexecuted_blocks=1 00:32:16.158 00:32:16.158 ' 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.158 --rc genhtml_branch_coverage=1 00:32:16.158 --rc genhtml_function_coverage=1 00:32:16.158 --rc genhtml_legend=1 00:32:16.158 --rc geninfo_all_blocks=1 00:32:16.158 --rc geninfo_unexecuted_blocks=1 00:32:16.158 00:32:16.158 ' 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.158 --rc genhtml_branch_coverage=1 00:32:16.158 --rc genhtml_function_coverage=1 00:32:16.158 --rc genhtml_legend=1 00:32:16.158 --rc geninfo_all_blocks=1 00:32:16.158 --rc geninfo_unexecuted_blocks=1 00:32:16.158 00:32:16.158 ' 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.158 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:16.159 09:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.159 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.420 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:16.420 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:16.420 09:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:32:16.420 09:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:24.560 09:50:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.560 09:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.560 
09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:24.560 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.560 09:50:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:24.560 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.560 09:50:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.560 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:24.560 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:24.561 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:24.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:24.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:32:24.561 00:32:24.561 --- 10.0.0.2 ping statistics --- 00:32:24.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.561 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:24.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:32:24.561 00:32:24.561 --- 10.0.0.1 ping statistics --- 00:32:24.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.561 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.561 09:50:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=539274 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 539274 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 539274 ']' 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:24.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.561 09:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:24.561 [2024-11-19 09:50:10.419691] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:24.561 [2024-11-19 09:50:10.420803] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:32:24.561 [2024-11-19 09:50:10.420855] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.561 [2024-11-19 09:50:10.520029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:24.561 [2024-11-19 09:50:10.571348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.561 [2024-11-19 09:50:10.571396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.561 [2024-11-19 09:50:10.571405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.561 [2024-11-19 09:50:10.571412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.561 [2024-11-19 09:50:10.571419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:24.561 [2024-11-19 09:50:10.573231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:24.561 [2024-11-19 09:50:10.573427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.561 [2024-11-19 09:50:10.573427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:24.561 [2024-11-19 09:50:10.649288] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:24.561 [2024-11-19 09:50:10.650220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:24.561 [2024-11-19 09:50:10.650682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:24.561 [2024-11-19 09:50:10.650827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:24.561 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.561 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:32:24.561 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:24.561 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.561 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:24.561 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.561 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:32:24.562 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:24.823 [2024-11-19 09:50:11.438339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.823 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:25.084 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.084 [2024-11-19 09:50:11.807091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.084 09:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:25.344 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:25.605 Malloc0 00:32:25.605 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:25.866 Delay0 00:32:25.866 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.126 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:26.126 NULL1 00:32:26.126 09:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:26.388 09:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=539687 00:32:26.388 09:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:26.388 09:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:26.388 09:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:27.774 Read completed with error (sct=0, sc=11) 00:32:27.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:27.774 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:27.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:27.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:32:27.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:27.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:27.774 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:27.774 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:28.036 true 00:32:28.036 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:28.036 09:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:28.977 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:28.977 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:28.977 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:29.237 true 00:32:29.237 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:29.237 09:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.497 09:50:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:29.497 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:29.497 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:29.758 true 00:32:29.758 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:29.758 09:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:31.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.143 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:31.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.143 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:31.143 09:50:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:31.403 true 00:32:31.403 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:31.403 09:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.345 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.345 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:32.345 09:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:32.614 true 00:32:32.614 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:32.614 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.614 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.875 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
00:32:32.875 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:33.135 true 00:32:33.135 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:33.135 09:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:34.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:34.075 09:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:34.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:34.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:34.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:34.337 09:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:34.337 09:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:34.597 true 00:32:34.597 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:34.597 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.858 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:34.858 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:34.858 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:35.118 true 00:32:35.118 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:35.118 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.379 09:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.379 09:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:35.379 09:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:35.640 true 00:32:35.640 09:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:35.640 09:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.900 09:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.900 09:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:35.900 09:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:36.162 true 00:32:36.162 09:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:36.162 09:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:37.547 09:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:37.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:37.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:37.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:37.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:37.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:37.547 09:50:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:37.547 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:37.808 true 00:32:37.808 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:37.808 09:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:38.750 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.750 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:38.750 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:39.010 true 00:32:39.010 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:39.010 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.010 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:32:39.271 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:39.271 09:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:39.531 true 00:32:39.531 09:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:39.531 09:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.471 09:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.731 09:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:40.731 09:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:40.991 true 00:32:40.991 09:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:40.991 09:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.932 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.932 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:41.932 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:42.193 true 00:32:42.193 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:42.193 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:42.454 09:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:42.454 09:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:42.454 09:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:42.713 true 00:32:42.713 09:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:42.713 09:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:44.094 09:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:44.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:44.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:44.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:44.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:44.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:44.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:44.095 09:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:44.095 09:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:44.355 true 00:32:44.355 09:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:44.355 09:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:45.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:45.298 09:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.298 09:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:45.298 09:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:45.558 true 00:32:45.558 09:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:45.558 09:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.558 09:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.819 09:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:45.819 09:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:46.080 true 00:32:46.080 09:50:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:46.080 09:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.080 09:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.341 09:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:46.341 09:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:46.602 true 00:32:46.602 09:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:46.602 09:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.602 09:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.862 09:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:46.862 09:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:47.123 true 
00:32:47.123 09:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:47.123 09:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.383 09:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:47.383 09:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:47.383 09:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:47.643 true 00:32:47.643 09:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:47.643 09:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.903 09:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:47.903 09:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:47.903 09:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 
00:32:48.164 true 00:32:48.164 09:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:48.164 09:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:49.546 09:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:49.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:49.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:49.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:49.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:49.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:49.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:49.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:49.546 09:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:49.546 09:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:49.806 true 00:32:49.806 09:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:49.806 09:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:50.746 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:50.746 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:50.746 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:51.005 true 00:32:51.005 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:51.005 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.005 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:51.266 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:51.266 09:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:51.526 true 00:32:51.526 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:51.526 09:50:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:51.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:51.799 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:51.799 [2024-11-19 09:50:38.452348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "Read NLB 1 * block size 512 > SGL length 1" error repeated from 09:50:38.452401 through 09:50:38.462583; duplicate entries stripped]
block size 512 > SGL length 1 00:32:51.802 [2024-11-19 09:50:38.462611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.802 [2024-11-19 09:50:38.462639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.802 [2024-11-19 09:50:38.462669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.462697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.462727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.462755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.462897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.462928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.462957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.462990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 
09:50:38.463135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.463968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 
[2024-11-19 09:50:38.464285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464732] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.464954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465824] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.465993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.466022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.466051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.466081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.466109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.466140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.466175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.466205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.466237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.803 [2024-11-19 09:50:38.466267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 
09:50:38.466770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.466986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 
[2024-11-19 09:50:38.467677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.467839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468677] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.468986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.804 [2024-11-19 09:50:38.469409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.805 [2024-11-19 09:50:38.469435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.805 [2024-11-19 09:50:38.469464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.805 [2024-11-19 09:50:38.469492] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.479988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480107] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.480944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 
09:50:38.480973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.481974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 
[2024-11-19 09:50:38.482415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.808 [2024-11-19 09:50:38.482813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.482847] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.482881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.482909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.482936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.482966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.482993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483707] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.483942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:51.809 [2024-11-19 09:50:38.484904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.484995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 
[2024-11-19 09:50:38.485572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.485994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486026] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.809 [2024-11-19 09:50:38.486333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.810 [2024-11-19 09:50:38.486992] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:51.811 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:32:51.811 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
> SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498886] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.498974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 
09:50:38.499891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.499974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.813 [2024-11-19 09:50:38.500299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 
[2024-11-19 09:50:38.500730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.500992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501142] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.501997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502310] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.502975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 
09:50:38.503262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.814 [2024-11-19 09:50:38.503591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.503622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.503650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.503682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 
[2024-11-19 09:50:38.504681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.504986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.505014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.505045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.505074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.505105] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.815 [2024-11-19 09:50:38.505135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* lines repeated from [2024-11-19 09:50:38.505171] through [2024-11-19 09:50:38.516037] omitted ...]
00:32:51.818 [2024-11-19 09:50:38.516065] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.516999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517095] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.818 [2024-11-19 09:50:38.517622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.517654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.517692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.517726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.517758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.517789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 
09:50:38.518525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.518977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 
[2024-11-19 09:50:38.519413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519849] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.519998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520841] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.520976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.521007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.521041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.521073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:51.819 [2024-11-19 09:50:38.521102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.521143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.521180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.819 [2024-11-19 09:50:38.521211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 
09:50:38.521308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.521944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 
[2024-11-19 09:50:38.522532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522945] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:51.820 [2024-11-19 09:50:38.522981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.534988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535335] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.108 [2024-11-19 09:50:38.535579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.535996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 
09:50:38.536221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.536981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 
[2024-11-19 09:50:38.537510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537907] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.537994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538732] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.109 [2024-11-19 09:50:38.538904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.538934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.538963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.539982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 
09:50:38.540046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 
[2024-11-19 09:50:38.540952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.540981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541412] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.110 [2024-11-19 09:50:38.541464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-11-19 09:50:38.552101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552527] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.113 [2024-11-19 09:50:38.552791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.552825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.552855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.552885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.552918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.552950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.552990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553621] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.553978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 
09:50:38.554529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.554591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 
[2024-11-19 09:50:38.555967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.555997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556402] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.114 [2024-11-19 09:50:38.556491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.556979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557396] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:52.115 [2024-11-19 09:50:38.557656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 
09:50:38.557848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.557980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.558983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 
[2024-11-19 09:50:38.559041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559484] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.115 [2024-11-19 09:50:38.559519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.118 
[2024-11-19 09:50:38.570112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.118 [2024-11-19 09:50:38.570147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.118 [2024-11-19 09:50:38.570181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.118 [2024-11-19 09:50:38.570211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.118 [2024-11-19 09:50:38.570242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.118 [2024-11-19 09:50:38.570273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.118 [2024-11-19 09:50:38.570304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.118 [2024-11-19 09:50:38.570334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.118 [2024-11-19 09:50:38.570364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570548] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.570970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571866] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.571971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 
09:50:38.572771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.572994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 
[2024-11-19 09:50:38.573743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.573962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.119 [2024-11-19 09:50:38.574001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574570] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.120 [2024-11-19 09:50:38.574988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575393] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.575963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 
09:50:38.576377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.120 [2024-11-19 09:50:38.576819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 
[2024-11-19 09:50:38.588657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.588750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589401] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.589974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590272] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.590988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 
09:50:38.591256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.124 [2024-11-19 09:50:38.591501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.591984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 
[2024-11-19 09:50:38.592147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.592969] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593868] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.593986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 
09:50:38.594806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.125 [2024-11-19 09:50:38.594959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.126 [2024-11-19 09:50:38.595095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.126 [2024-11-19 09:50:38.595125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.126 [2024-11-19 09:50:38.595162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.126 [2024-11-19 09:50:38.595197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.126 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:52.126 [2024-11-19 09:50:38.595230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.126 [2024-11-19 09:50:38.595261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.126 [2024-11-19 09:50:38.595291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.126 [2024-11-19 09:50:38.595324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 
[2024-11-19 09:50:38.606862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.606986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607317] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.607975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608378] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.608961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.609000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.609032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.609058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.609089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.609113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.609138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.129 [2024-11-19 09:50:38.609168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 
09:50:38.609258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.609703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 
[2024-11-19 09:50:38.610558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.610976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611005] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611964] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.611994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.130 [2024-11-19 09:50:38.612491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.612524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.612556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.612591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.612624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.612653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.612681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.612709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.612741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.612775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.613214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.613247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.613277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.613321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.613355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 09:50:38.613386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.131 [2024-11-19 
09:50:38.613414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
09:50:38.624560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.624964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 
[2024-11-19 09:50:38.625565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.625975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626303] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.626970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.627001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.627033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.627062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.627094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.627122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.627151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.627186] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.134 [2024-11-19 09:50:38.627214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.627982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 
09:50:38.628068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.628978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 
[2024-11-19 09:50:38.629076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629534] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.629820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.630196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.630234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.630264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.630298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.630327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.135 [2024-11-19 09:50:38.630355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.135 [2024-11-19 09:50:38.630390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630792] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.630989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.631020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.631049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.631080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.631111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.631142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.631176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.631206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.136 [2024-11-19 09:50:38.631242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1
00:32:52.136 [2024-11-19 09:50:38.631275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:52.136 Message suppressed 999 times: [2024-11-19 09:50:38.632390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:52.136 Read completed with error (sct=0, sc=15)
00:32:52.139 [2024-11-19
09:50:38.642397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.642827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 
[2024-11-19 09:50:38.643589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.643980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.644009] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.644041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.644072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.644101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.644135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.644170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.644208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.644240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.139 [2024-11-19 09:50:38.644270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644910] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.644996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 
09:50:38.645924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.645993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.646668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 
[2024-11-19 09:50:38.647345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647771] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.140 [2024-11-19 09:50:38.647931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.647961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.647991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648668] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.648975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.649003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.649040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.649069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.649101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.141 [2024-11-19 09:50:38.649129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 
09:50:38.660794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.660853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 true 00:32:52.144 [2024-11-19 09:50:38.661423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661754] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.144 [2024-11-19 09:50:38.661952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.661981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662680] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.662999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 
09:50:38.663684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.663991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 
[2024-11-19 09:50:38.664895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.664999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665350] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.145 [2024-11-19 09:50:38.665408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.665995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666249] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.666981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.667011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.667046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.667105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.667135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.667173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.667201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.667232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 09:50:38.667272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 [2024-11-19 
09:50:38.667301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.146 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:32:52.148 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 
00:32:52.148 09:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:32:52.149 [2024-11-19 09:50:38.678862] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.678891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.678929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.678961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.678992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.149 [2024-11-19 09:50:38.679614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679802] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.679982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 
09:50:38.680863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.680984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 
[2024-11-19 09:50:38.681817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.681997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682258] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.682991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.150 [2024-11-19 09:50:38.683386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683486] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.683994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 
09:50:38.684311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.684970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 
[2024-11-19 09:50:38.685753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.685968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.686001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.686031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.686066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.686097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.686128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.686170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.686203] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.151 [2024-11-19 09:50:38.686233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 
[2024-11-19 09:50:38.697379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697832] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.697981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698746] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.698991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 
09:50:38.699751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.155 [2024-11-19 09:50:38.699780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.699813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.699844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 
[2024-11-19 09:50:38.700917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.700982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701345] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.156 [2024-11-19 09:50:38.701984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702439] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.702994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.703024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.703052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.703083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.703108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.703140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.703178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.703208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.156 [2024-11-19 09:50:38.703241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 
09:50:38.703335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 [2024-11-19 09:50:38.703909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.157 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:52.157
09:50:38.714946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.714974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 
[2024-11-19 09:50:38.715929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.715959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.716401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.716435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.716465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.716496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.716524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.716553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.716580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.160 [2024-11-19 09:50:38.716609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716768] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.716994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717663] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.717979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 
09:50:38.718705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.718985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 
[2024-11-19 09:50:38.719609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.161 [2024-11-19 09:50:38.719740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.719769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.719800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.719825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.719849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.719873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.719898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.719929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720556] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.720993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721458] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.162 [2024-11-19 09:50:38.721918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated for each invocation from 09:50:38.721945 through 09:50:38.733067 (log time 00:32:52.162-00:32:52.165); duplicates omitted ...]
00:32:52.165 [2024-11-19 09:50:38.733098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 
09:50:38.733574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.165 [2024-11-19 09:50:38.733740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.733770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.733805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.733836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.733873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.733909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.733938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.733965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.733993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.734978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 
[2024-11-19 09:50:38.735055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735497] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.735993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736398] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.736974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 
09:50:38.737524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.166 [2024-11-19 09:50:38.737554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.737982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 
[2024-11-19 09:50:38.738459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738889] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.738921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.739972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.740002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.740034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.740065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.740095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.740123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.740151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.167 [2024-11-19 09:50:38.740193] 
00:32:52.167 [2024-11-19 09:50:38.740223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:52.168 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 
09:50:38.751836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.751997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 
[2024-11-19 09:50:38.752839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.752997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753768] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.753981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754690] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.171 [2024-11-19 09:50:38.754843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.754873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.754905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.754937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.754971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 
09:50:38.755586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.755972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 
[2024-11-19 09:50:38.756625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.756975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757085] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.757747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.758086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.758122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.758154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.758193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.758223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.758258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.758285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.758314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172 [2024-11-19 09:50:38.758344] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.172
[2024-11-19 09:50:38.758374 through 09:50:38.769084: the preceding "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeated several hundred times; identical repeats elided]
00:32:52.176 [2024-11-19 09:50:38.769084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769591] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.769777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 
09:50:38.770844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.770972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 
[2024-11-19 09:50:38.771759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.771977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.772007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.772035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.772064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.176 [2024-11-19 09:50:38.772105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772516] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.177 [2024-11-19 09:50:38.772984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773417] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.773999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 
09:50:38.774340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.774992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 
[2024-11-19 09:50:38.775761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.177 [2024-11-19 09:50:38.775926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.775957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.775995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776224] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.178 [2024-11-19 09:50:38.776722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.179 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:52.181 [2024-11-19
09:50:38.787870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.787897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.787933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.787960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.787988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.788650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 
[2024-11-19 09:50:38.789311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789776] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.181 [2024-11-19 09:50:38.789927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.789963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.789995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790686] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.790982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 
09:50:38.791687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.791811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 
[2024-11-19 09:50:38.792845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.792978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.182 [2024-11-19 09:50:38.793297] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.793987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794405] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.183 [2024-11-19 09:50:38.794894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 
09:50:38.806894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.806983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.186 [2024-11-19 09:50:38.807351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 
[2024-11-19 09:50:38.807770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.807996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808340] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.808982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809741] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.809984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 
09:50:38.810633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.187 [2024-11-19 09:50:38.810824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.810857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.810886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 
[2024-11-19 09:50:38.811662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.811987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812102] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.812943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.188 [2024-11-19 09:50:38.813367] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.190 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:32:52.191 [2024-11-19 09:50:38.824552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.191 [2024-11-19 09:50:38.824580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.191 [2024-11-19 09:50:38.824608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.191 [2024-11-19 09:50:38.824640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.191 [2024-11-19 09:50:38.824669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.191 [2024-11-19 09:50:38.824700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.191 [2024-11-19 09:50:38.824730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.191 [2024-11-19 09:50:38.824772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.191 [2024-11-19 09:50:38.824801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.824832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.824866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.824895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.824932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.824964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 
09:50:38.824995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.825959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 
[2024-11-19 09:50:38.825988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826420] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.826575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827955] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.827988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.192 [2024-11-19 09:50:38.828020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 
09:50:38.828867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.828996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.478 [2024-11-19 09:50:38.829665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.479 [2024-11-19 09:50:38.829696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.479 [2024-11-19 09:50:38.829728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.479 [2024-11-19 09:50:38.829761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.479 [2024-11-19 09:50:38.829793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.479 [2024-11-19 09:50:38.829824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:52.479 09:50:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:52.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:52.479 [2024-11-19 09:50:39.026641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 
09:50:39.030585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.030991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 
[2024-11-19 09:50:39.031561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.031973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032259] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.480 [2024-11-19 09:50:39.032960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.032987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033135] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.033870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 
09:50:39.034116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.034991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 
[2024-11-19 09:50:39.035299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035708] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.035996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.036025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.036058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.036088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.036118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.481 [2024-11-19 09:50:39.036148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.036185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.036216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.481 [2024-11-19 09:50:39.036250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.482 [2024-11-19 09:50:39.036589] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:52.484 Message suppressed 999 times: [2024-11-19 09:50:39.041559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:52.484 Read completed with error (sct=0, sc=15)
00:32:52.486 [2024-11-19 09:50:39.047557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.047993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 
09:50:39.048057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.048982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 
[2024-11-19 09:50:39.049065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049460] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.486 [2024-11-19 09:50:39.049930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.049961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.049987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050694] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.050982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 
09:50:39.051586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.051968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 
[2024-11-19 09:50:39.052456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052966] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.052995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.053023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.053053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.053081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.053110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.053558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.053590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.487 [2024-11-19 09:50:39.053621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.053999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.054028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.054058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.054086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.054114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.054142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.054174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.054208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.054239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488 [2024-11-19 09:50:39.054265] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.488
09:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:52.491
09:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:52.491
> SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065702] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.065974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 
09:50:39.066609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.491 [2024-11-19 09:50:39.066728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.066760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.066792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.066831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.066868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.066896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.066924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.066948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.066981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 
[2024-11-19 09:50:39.067577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.067986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068111] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.068997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069070] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.069983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.492 [2024-11-19 09:50:39.070344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 
09:50:39.070375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.070987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 
[2024-11-19 09:50:39.071233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.493 [2024-11-19 09:50:39.071781] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:32:52.494 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
> SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.082816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083251] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.496 [2024-11-19 09:50:39.083609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.083978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 
09:50:39.084124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.084991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 
[2024-11-19 09:50:39.085019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085545] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.085763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.497 [2024-11-19 09:50:39.086670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086880] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.086995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 
09:50:39.087781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.087992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 
[2024-11-19 09:50:39.088745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.088981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.089009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.089040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.089071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.089103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.089133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.089172] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.498 [2024-11-19 09:50:39.089201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.501 [2024-11-19 09:50:39.100456] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.501 [2024-11-19 09:50:39.100492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.501 [2024-11-19 09:50:39.100526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.501 [2024-11-19 09:50:39.100554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.501 [2024-11-19 09:50:39.100586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.501 [2024-11-19 09:50:39.100617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.501 [2024-11-19 09:50:39.100643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.100998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101331] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.101950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 
09:50:39.102330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.502 [2024-11-19 09:50:39.102916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.102946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.102976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 
[2024-11-19 09:50:39.103191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103967] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.103994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104844] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.104967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 
09:50:39.105757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.105992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.503 [2024-11-19 09:50:39.106373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.106405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.106434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.106867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.106899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.106928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.106953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.106981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 
[2024-11-19 09:50:39.107198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [2024-11-19 09:50:39.107635] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.504 [... identical "Read NLB 1 * block size 512 > SGL length 1" error lines repeated with successive timestamps from 09:50:39.107671 through 09:50:39.118165, omitted ...] 00:32:52.506 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:52.507 [2024-11-19 09:50:39.118201] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.118992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119086] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.507 [2024-11-19 09:50:39.119419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.119994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 
09:50:39.120124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.120997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 
[2024-11-19 09:50:39.121483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121919] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.121980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.508 [2024-11-19 09:50:39.122810] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.122841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.122871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.122998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 
09:50:39.123839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.123993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.124988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 
[2024-11-19 09:50:39.125112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125554] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.509 [2024-11-19 09:50:39.125583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical error line repeated for the remainder of this unit-test output, timestamps 09:50:39.125611 through 09:50:39.135871; repeats omitted]
[2024-11-19 09:50:39.135900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.512 [2024-11-19 09:50:39.135931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.512 [2024-11-19 09:50:39.135964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.512 [2024-11-19 09:50:39.135993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.512 [2024-11-19 09:50:39.136024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.512 [2024-11-19 09:50:39.136055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.512 [2024-11-19 09:50:39.136450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136714] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.136988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137625] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.137998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 
09:50:39.138622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.138993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 
[2024-11-19 09:50:39.139829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.513 [2024-11-19 09:50:39.139915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.139950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.139984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140288] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.140825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141605] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.141976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 
09:50:39.142538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.142993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 [2024-11-19 09:50:39.143025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.514 
[identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd errors repeated from 09:50:39.143056 through 09:50:39.154447 elided]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:52.516 
[2024-11-19 09:50:39.154483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154922] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.154977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.155991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156021] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.156993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 
09:50:39.157186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.518 [2024-11-19 09:50:39.157441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.157965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 
[2024-11-19 09:50:39.158130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158617] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.158997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159629] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.159970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.519 [2024-11-19 09:50:39.160867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 
09:50:39.160901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.160931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.160961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.160996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520 [2024-11-19 09:50:39.161389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.520
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.172975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 
[2024-11-19 09:50:39.173261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173691] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.173980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174654] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.174989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.175019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.175048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.175076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.523 [2024-11-19 09:50:39.175109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 
09:50:39.175945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.175976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 
[2024-11-19 09:50:39.176903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.176993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177345] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.177651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.178232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.178267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.178298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.178328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.524 [2024-11-19 09:50:39.178370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.524 [2024-11-19 09:50:39.178402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178819] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.178979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 09:50:39.179729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.525 [2024-11-19 
09:50:39.179761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.527 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:52.528 [2024-11-19 09:50:39.190705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.190736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.190964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 
[2024-11-19 09:50:39.191381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191755] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.528 [2024-11-19 09:50:39.191816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.191848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.191879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.191911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.191943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.191975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192700] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.192917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.193983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 
09:50:39.194196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.194973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 
[2024-11-19 09:50:39.195129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.529 [2024-11-19 09:50:39.195519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195680] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.195983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196856] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.196982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.530 [2024-11-19 09:50:39.197434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.803 [2024-11-19 09:50:39.197463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.803 [2024-11-19 09:50:39.197489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.803 [2024-11-19 09:50:39.197514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.803 [2024-11-19 09:50:39.197537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.803 [2024-11-19 09:50:39.197564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.803 [2024-11-19 09:50:39.197597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.803 [2024-11-19 09:50:39.197629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.803 [2024-11-19 09:50:39.197661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.803 [2024-11-19 
09:50:39.197691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19
09:50:39.209027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.209978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 
[2024-11-19 09:50:39.210143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.210985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211020] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211907] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.211972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.806 [2024-11-19 09:50:39.212440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 
09:50:39.212912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.212943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 
[2024-11-19 09:50:39.213853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.213978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214275] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.214985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215579] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.807 [2024-11-19 09:50:39.215618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.215972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.216001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.216035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.808 [2024-11-19 09:50:39.216067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.808 true 00:32:52.808 [... same "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" line repeated for every read in the test loop, timestamps 09:50:39.216098 through 09:50:39.227213; per-line timestamps omitted ...] 00:32:52.810 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:32:52.811
[2024-11-19 09:50:39.227244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.227304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.227334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.227370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.227402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.227433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.227462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.227491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.227519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.227551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228142] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.228978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229038] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.811 [2024-11-19 09:50:39.229672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.229703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.229732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.229762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.229793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.229822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.229852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.229880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 
09:50:39.229910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.230976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 
[2024-11-19 09:50:39.231032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231461] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.231994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232944] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.232973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.812 [2024-11-19 09:50:39.233490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 
09:50:39.233851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.233972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813 [2024-11-19 09:50:39.234288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.813
09:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687 00:32:52.816 09:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:52.816
> SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245575] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.245991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 
09:50:39.246808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.246987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.247022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.816 [2024-11-19 09:50:39.247051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 
[2024-11-19 09:50:39.247621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.247971] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.248994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249278] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.249995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 
09:50:39.250182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.817 [2024-11-19 09:50:39.250347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.250378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.250414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.250445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.250473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.250511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.250540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.250587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.250943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.250976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 
[2024-11-19 09:50:39.251470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:32:52.818 [2024-11-19 09:50:39.251914] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:52.818 [2024-11-19 09:50:39.251944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:52.818 [2024-11-19 09:50:39.259858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1
00:32:52.820 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:53.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:53.761 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:53.761 Message suppressed 999 times:
Read completed with error (sct=0, sc=11)
00:32:53.761 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:32:53.761 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:32:54.028 true
00:32:54.028 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687
00:32:54.028 09:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:55.028 09:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:55.028 09:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:32:55.028 09:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:32:55.301 true
00:32:55.301 09:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687
00:32:55.301 09:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:55.584 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:55.584 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:32:55.584 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:32:55.880 true
00:32:55.880 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687
00:32:55.880 09:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:56.878 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:56.878 Initializing NVMe Controllers
00:32:56.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:56.878 Controller IO queue size 128, less than required.
00:32:56.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:56.878 Controller IO queue size 128, less than required.
00:32:56.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:56.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:56.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:56.878 Initialization complete. Launching workers.
00:32:56.878 ========================================================
00:32:56.878 Latency(us)
00:32:56.878 Device Information : IOPS MiB/s Average min max
00:32:56.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3053.87 1.49 26704.64 1626.25 1046033.29
00:32:56.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18405.60 8.99 6954.61 1129.50 336171.97
00:32:56.878 ========================================================
00:32:56.878 Total : 21459.47 10.48 9765.21 1129.50 1046033.29
00:32:57.164 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:32:57.164 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:32:57.164 true
00:32:57.164 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 539687
00:32:57.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (539687) - No such process
00:32:57.164 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 539687
00:32:57.164 09:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:57.439 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:57.722 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:32:57.722 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:32:57.722 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:32:57.722 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:57.722 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:32:57.722 null0
00:32:57.722 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:57.722 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:57.722 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:32:58.003 null1
00:32:58.003 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:58.003 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:58.003 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:32:58.003 null2
00:32:58.297 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:58.297 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:58.297 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:32:58.297 null3
00:32:58.297 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:58.297 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:58.297 09:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:32:58.594 null4
00:32:58.594 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:58.594 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:58.594 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:32:58.594 null5
00:32:58.594 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:58.594 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:58.594 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:32:58.899 null6
00:32:58.899 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:58.899 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:58.899 null7 00:32:58.899 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:58.899 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:58.899 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:58.899 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:58.900 09:50:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 546035 546037 546040 546044 546046 546049 546052 546054 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:58.900 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:59.186 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.186 09:50:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:59.186 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:59.186 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:59.186 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:59.186 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:59.186 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:59.186 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:59.186 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.186 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.186 09:50:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.448 09:50:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.448 09:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:59.448 09:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:59.448 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:59.709 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:59.710 09:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.710 09:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:59.710 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:59.971 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:59.971 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:59.971 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:59.971 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:59.971 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:59.971 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:59.971 09:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:59.971 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:59.971 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:59.971 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:00.233 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:00.495 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:00.495 09:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:00.495 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:00.495 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.496 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:00.757 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:00.757 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:00.757 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:00.757 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:00.758 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:00.758 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.758 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.758 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:00.758 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:00.758 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:00.758 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:00.758 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:00.758 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:01.019 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.282 09:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.543 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:01.806 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:02.068 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:02.330 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:02.330 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:02.330 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:02.330 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.330 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.331 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:02.331 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:02.331 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.331 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.331 09:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:02.331 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:02.593 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:02.593 09:50:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:02.864 rmmod nvme_tcp 00:33:02.864 rmmod nvme_fabrics 00:33:02.864 rmmod nvme_keyring 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 539274 ']' 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 539274 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 539274 ']' 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 539274 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:33:02.864 09:50:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 539274 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 539274' 00:33:02.864 killing process with pid 539274 00:33:02.864 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 539274 00:33:02.865 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 539274 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:33:03.131 09:50:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.131 09:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.047 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:05.047 00:33:05.047 real 0m49.092s 00:33:05.047 user 2m59.983s 00:33:05.047 sys 0m20.748s 00:33:05.047 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.047 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:05.047 ************************************ 00:33:05.047 END TEST nvmf_ns_hotplug_stress 00:33:05.047 ************************************ 00:33:05.309 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:05.309 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:05.309 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:05.309 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:05.309 
************************************ 00:33:05.309 START TEST nvmf_delete_subsystem 00:33:05.309 ************************************ 00:33:05.309 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:05.309 * Looking for test storage... 00:33:05.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.309 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:05.309 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:33:05.309 09:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:05.309 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:05.309 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.309 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.309 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.309 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.309 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.309 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.310 
09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.310 --rc genhtml_branch_coverage=1 00:33:05.310 --rc genhtml_function_coverage=1 00:33:05.310 --rc genhtml_legend=1 00:33:05.310 --rc geninfo_all_blocks=1 00:33:05.310 --rc geninfo_unexecuted_blocks=1 00:33:05.310 00:33:05.310 ' 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.310 --rc genhtml_branch_coverage=1 00:33:05.310 --rc genhtml_function_coverage=1 00:33:05.310 --rc genhtml_legend=1 00:33:05.310 --rc geninfo_all_blocks=1 00:33:05.310 --rc geninfo_unexecuted_blocks=1 00:33:05.310 00:33:05.310 ' 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.310 --rc genhtml_branch_coverage=1 00:33:05.310 --rc genhtml_function_coverage=1 00:33:05.310 --rc genhtml_legend=1 00:33:05.310 --rc geninfo_all_blocks=1 00:33:05.310 --rc geninfo_unexecuted_blocks=1 00:33:05.310 00:33:05.310 ' 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:05.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.310 --rc genhtml_branch_coverage=1 00:33:05.310 --rc genhtml_function_coverage=1 00:33:05.310 --rc genhtml_legend=1 00:33:05.310 --rc geninfo_all_blocks=1 00:33:05.310 --rc geninfo_unexecuted_blocks=1 00:33:05.310 00:33:05.310 ' 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.310 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.571 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:05.571 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:05.571 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.571 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.571 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.571 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.572 09:50:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.572 09:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.717 09:50:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.717 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.718 09:50:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:13.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:13.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.718 09:50:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:13.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:13.718 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:33:13.718 09:50:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.718 09:50:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:13.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:33:13.718 00:33:13.718 --- 10.0.0.2 ping statistics --- 00:33:13.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.718 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:13.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:33:13.718 00:33:13.718 --- 10.0.0.1 ping statistics --- 00:33:13.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.718 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.718 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:13.719 
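The netns setup traced above (and verified by the two pings) can be condensed into a short sketch. Interface names (`cvl_0_0`/`cvl_0_1`), the namespace name, and the 10.0.0.0/24 addresses are taken from the log; the `run` wrapper only echoes each command, since the real ones need root and the physical E810 ports this rig has.

```shell
# Dry-run sketch of the namespace split performed by nvmftestinit above:
# one port of the NIC pair is moved into a namespace so target and
# initiator can talk over real hardware on a single host.
run() { echo "+ $*"; }

ns=cvl_0_0_ns_spdk          # namespace that will hold the target-side port
run ip netns add "$ns"
run ip link set cvl_0_0 netns "$ns"
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run ip netns exec "$ns" ping -c 1 10.0.0.1                   # reachability check
```

Because the target later runs under `ip netns exec cvl_0_0_ns_spdk` (see the `NVMF_TARGET_NS_CMD` line above), initiator-side tools on the host see 10.0.0.2 as a remote endpoint even though both ends share the machine.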
09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=550951 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 550951 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 550951 ']' 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.719 09:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 [2024-11-19 09:50:59.564035] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:13.719 [2024-11-19 09:50:59.565137] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:13.719 [2024-11-19 09:50:59.565203] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.719 [2024-11-19 09:50:59.663645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:13.719 [2024-11-19 09:50:59.715984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.719 [2024-11-19 09:50:59.716035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.719 [2024-11-19 09:50:59.716044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.719 [2024-11-19 09:50:59.716051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.719 [2024-11-19 09:50:59.716058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.719 [2024-11-19 09:50:59.717689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.719 [2024-11-19 09:50:59.717694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.719 [2024-11-19 09:50:59.793981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:33:13.719 [2024-11-19 09:50:59.794583] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:13.719 [2024-11-19 09:50:59.794882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 [2024-11-19 09:51:00.430725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.719 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.980 [2024-11-19 09:51:00.463121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.980 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.981 NULL1 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.981 Delay0 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=551312 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:13.981 09:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:13.981 [2024-11-19 09:51:00.587044] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
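The `rpc_cmd` calls traced through `delete_subsystem.sh` above amount to the following sequence (echoed only; running it for real needs a live `nvmf_tgt`, and the `scripts/rpc.py` path is an assumption about an SPDK checkout, not something this log prints):

```shell
# Condensed sketch of the RPC sequence the test drives above: TCP transport,
# a subsystem with a listener on the target-namespace IP, and a null bdev
# wrapped in a delay bdev exposed as the subsystem's namespace.
rpc="scripts/rpc.py"
for cmd in \
  "nvmf_create_transport -t tcp -o -u 8192" \
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10" \
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420" \
  "bdev_null_create NULL1 1000 512" \
  "bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000" \
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0"
do
  echo "$rpc $cmd"
done
```

The delay bdev is the point of the test: with 1,000,000 us latencies on every I/O class, `spdk_nvme_perf` is guaranteed to have commands in flight when `nvmf_delete_subsystem` fires two seconds later, so deletion must race against outstanding I/O.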
00:33:15.898 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:15.898 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.898 09:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 
00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 starting I/O failed: -6 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 [2024-11-19 09:51:02.834857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd680 is same with the state(6) to be set 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 
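The `(sct=0, sc=8)` completions flooding the log here are the expected outcome, not a malfunction: per the NVMe base specification, status code type 0 (generic command status) with status code 0x08 is "Command Aborted due to SQ Deletion", which is what the initiator sees when `nvmf_delete_subsystem` tears down the queue pairs while perf still has I/O queued behind Delay0. A minimal decoder, offered as an aside rather than anything the log itself prints, and mapping only the one code seen in this run:

```shell
# Decode the status pair repeated above. Only sct=0/sc=8 is mapped in this
# sketch; the mapping comes from the NVMe base spec's generic status table.
decode_nvme_status() {
  local sct=$1 sc=$2
  if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
    echo "Command Aborted due to SQ Deletion"
  else
    echo "sct=$sct sc=$sc (not mapped in this sketch)"
  fi
}
decode_nvme_status 0 8   # -> Command Aborted due to SQ Deletion
```

The interleaved "starting I/O failed: -6" lines are the submission-side counterpart: new commands can no longer be queued once the qpair is going away.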
Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Read completed with error (sct=0, sc=8) 00:33:16.239 Write completed with error 
(sct=0, sc=8)
00:33:16.239 Read completed with error (sct=0, sc=8)
00:33:16.239 Write completed with error (sct=0, sc=8)
00:33:16.239 starting I/O failed: -6
00:33:16.239 [2024-11-19 09:51:02.839383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f18e000d020 is same with the state(6) to be set
00:33:17.182 [2024-11-19 09:51:03.811521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afe9a0 is same with the state(6) to be set
00:33:17.182 [2024-11-19 09:51:03.839086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd860 is same with the state(6) to be set
00:33:17.182 [2024-11-19 09:51:03.839194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd4a0 is same with the state(6) to be set
00:33:17.183 [2024-11-19 09:51:03.840023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f18e000d350 is same with the state(6) to be set
00:33:17.183 [2024-11-19 09:51:03.840333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f18e0000c40 is same with the state(6) to be set
00:33:17.183 Initializing NVMe Controllers
00:33:17.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:17.183 Controller IO queue size 128, less than required.
00:33:17.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:17.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:33:17.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:33:17.183 Initialization complete. Launching workers.
00:33:17.183 ========================================================
00:33:17.183 Latency(us)
00:33:17.183 Device Information : IOPS MiB/s Average min max
00:33:17.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.18 0.09 884373.21 398.75 1007164.73
00:33:17.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.79 0.07 962318.65 316.27 2000882.51
00:33:17.183 ========================================================
00:33:17.183 Total : 325.97 0.16 920430.41 316.27 2000882.51
00:33:17.183
00:33:17.183 [2024-11-19 09:51:03.840778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afe9a0 (9): Bad file descriptor
00:33:17.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:33:17.183 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.183 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:33:17.183 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 551312
00:33:17.183 09:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:33:17.754 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 551312
00:33:17.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (551312) - No such process
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 551312
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 551312
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 551312
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:17.755 [2024-11-19 09:51:04.375003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=552066 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 552066 00:33:17.755 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:17.755 [2024-11-19 09:51:04.478543] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:18.327 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:18.327 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 552066 00:33:18.327 09:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:18.900 09:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:18.900 09:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 552066 00:33:18.900 09:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:19.471 09:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:19.471 09:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 552066 00:33:19.472 09:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:19.732 09:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( 
delay++ > 20 )) 00:33:19.732 09:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 552066 00:33:19.732 09:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:20.306 09:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:20.306 09:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 552066 00:33:20.306 09:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:20.877 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:20.877 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 552066 00:33:20.877 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:21.139 Initializing NVMe Controllers 00:33:21.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:21.139 Controller IO queue size 128, less than required. 00:33:21.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:21.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:21.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:21.139 Initialization complete. Launching workers. 
00:33:21.139 ========================================================
00:33:21.139 Latency(us)
00:33:21.139 Device Information : IOPS MiB/s Average min max
00:33:21.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002334.39 1000115.53 1005833.66
00:33:21.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004073.75 1000216.99 1011226.45
00:33:21.139 ========================================================
00:33:21.139 Total : 256.00 0.12 1003204.07 1000115.53 1011226.45
00:33:21.139
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 552066
00:33:21.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (552066) - No such process
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 552066
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.402 rmmod nvme_tcp 00:33:21.402 rmmod nvme_fabrics 00:33:21.402 rmmod nvme_keyring 00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 550951 ']' 00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 550951 00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 550951 ']' 00:33:21.402 09:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 550951 00:33:21.402 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:33:21.402 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.402 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 550951 00:33:21.402 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:21.402 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:21.402 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 550951' 00:33:21.402 killing process with pid 550951 00:33:21.402 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 550951 00:33:21.402 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 550951 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.664 09:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:23.582 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:23.582
00:33:23.582 real 0m18.423s
00:33:23.582 user 0m26.823s
00:33:23.582 sys 0m7.516s
00:33:23.582 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:23.582 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:23.582 ************************************
00:33:23.582 END TEST nvmf_delete_subsystem ************************************
00:33:23.582 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:33:23.582 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:23.582 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:23.582 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:23.845 ************************************
00:33:23.845 START TEST nvmf_host_management ************************************
00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
* Looking for test storage...
00:33:23.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.845 09:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.845 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:23.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.846 --rc genhtml_branch_coverage=1 00:33:23.846 --rc genhtml_function_coverage=1 00:33:23.846 --rc genhtml_legend=1 00:33:23.846 --rc geninfo_all_blocks=1 00:33:23.846 --rc geninfo_unexecuted_blocks=1 00:33:23.846 00:33:23.846 ' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:23.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.846 --rc genhtml_branch_coverage=1 00:33:23.846 --rc genhtml_function_coverage=1 00:33:23.846 --rc genhtml_legend=1 00:33:23.846 --rc geninfo_all_blocks=1 00:33:23.846 --rc geninfo_unexecuted_blocks=1 00:33:23.846 00:33:23.846 ' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:23.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.846 --rc genhtml_branch_coverage=1 00:33:23.846 --rc genhtml_function_coverage=1 00:33:23.846 --rc genhtml_legend=1 00:33:23.846 --rc geninfo_all_blocks=1 00:33:23.846 --rc geninfo_unexecuted_blocks=1 00:33:23.846 00:33:23.846 ' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:23.846 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.846 --rc genhtml_branch_coverage=1 00:33:23.846 --rc genhtml_function_coverage=1 00:33:23.846 --rc genhtml_legend=1 00:33:23.846 --rc geninfo_all_blocks=1 00:33:23.846 --rc geninfo_unexecuted_blocks=1 00:33:23.846 00:33:23.846 ' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.846 09:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.846 
09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.846 09:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.995 
09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.995 09:51:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:31.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.995 09:51:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:31.995 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.995 09:51:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:31.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:31.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:31.995 09:51:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.995 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:33:31.996 00:33:31.996 --- 10.0.0.2 ping statistics --- 00:33:31.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.996 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:31.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:33:31.996 00:33:31.996 --- 10.0.0.1 ping statistics --- 00:33:31.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.996 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:31.996 09:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=557343 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 557343 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 557343 ']' 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.996 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:31.996 [2024-11-19 09:51:18.103175] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:31.996 [2024-11-19 09:51:18.104310] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:31.996 [2024-11-19 09:51:18.104360] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.996 [2024-11-19 09:51:18.205202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:31.996 [2024-11-19 09:51:18.260030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.996 [2024-11-19 09:51:18.260079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.996 [2024-11-19 09:51:18.260087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.996 [2024-11-19 09:51:18.260094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.996 [2024-11-19 09:51:18.260100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:31.996 [2024-11-19 09:51:18.262032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:31.996 [2024-11-19 09:51:18.262212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:31.996 [2024-11-19 09:51:18.262439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.996 [2024-11-19 09:51:18.262439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:31.996 [2024-11-19 09:51:18.340876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:31.996 [2024-11-19 09:51:18.341963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:31.996 [2024-11-19 09:51:18.342134] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:31.996 [2024-11-19 09:51:18.342805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:31.996 [2024-11-19 09:51:18.342834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:32.258 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.258 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:32.258 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:32.258 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.258 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:32.258 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.258 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:32.258 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.258 09:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:32.258 [2024-11-19 09:51:18.971303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:32.520 09:51:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:32.520 Malloc0 00:33:32.520 [2024-11-19 09:51:19.075584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=557569 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 557569 /var/tmp/bdevperf.sock 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 557569 ']' 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.520 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:32.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:32.521 { 00:33:32.521 "params": { 00:33:32.521 "name": "Nvme$subsystem", 00:33:32.521 "trtype": "$TEST_TRANSPORT", 00:33:32.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:32.521 "adrfam": "ipv4", 00:33:32.521 "trsvcid": "$NVMF_PORT", 00:33:32.521 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:33:32.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:32.521 "hdgst": ${hdgst:-false}, 00:33:32.521 "ddgst": ${ddgst:-false} 00:33:32.521 }, 00:33:32.521 "method": "bdev_nvme_attach_controller" 00:33:32.521 } 00:33:32.521 EOF 00:33:32.521 )") 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:32.521 09:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:32.521 "params": { 00:33:32.521 "name": "Nvme0", 00:33:32.521 "trtype": "tcp", 00:33:32.521 "traddr": "10.0.0.2", 00:33:32.521 "adrfam": "ipv4", 00:33:32.521 "trsvcid": "4420", 00:33:32.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:32.521 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.521 "hdgst": false, 00:33:32.521 "ddgst": false 00:33:32.521 }, 00:33:32.521 "method": "bdev_nvme_attach_controller" 00:33:32.521 }' 00:33:32.521 [2024-11-19 09:51:19.184975] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:32.521 [2024-11-19 09:51:19.185045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557569 ] 00:33:32.783 [2024-11-19 09:51:19.278490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.783 [2024-11-19 09:51:19.332071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.045 Running I/O for 10 seconds... 
00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:33.306 09:51:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.306 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:33.570 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.570 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:33:33.570 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:33:33.570 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:33.570 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:33.570 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:33.570 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:33.570 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.570 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:33.570 
[2024-11-19 09:51:20.089599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.570 [2024-11-19 09:51:20.089663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.570 [2024-11-19 09:51:20.089672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.570 [2024-11-19 09:51:20.089680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.570 [2024-11-19 09:51:20.089688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.570 [2024-11-19 09:51:20.089696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.570 [2024-11-19 09:51:20.089704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.570 [2024-11-19 09:51:20.089712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.570 [2024-11-19 09:51:20.089719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089749] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.571 [2024-11-19 09:51:20.089857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089866]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.089882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.571 [2024-11-19 09:51:20.089891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.089900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.571 [2024-11-19 09:51:20.089911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.089925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19
09:51:20.089940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.571 [2024-11-19 09:51:20.089948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.089958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed000 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.089993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 
09:51:20.090016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090106] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26342a0 is same with the state(6) to be set 00:33:33.571 [2024-11-19 09:51:20.090662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.571 [2024-11-19 09:51:20.090685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.090705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.571 [2024-11-19 09:51:20.090714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.090732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.571 [2024-11-19 09:51:20.090740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.090750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.571 [2024-11-19 09:51:20.090758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.090767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.571 [2024-11-19 09:51:20.090775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.090785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.571 [2024-11-19 09:51:20.090793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.090802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.571 [2024-11-19 09:51:20.090810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.090820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.571 [2024-11-19 09:51:20.090827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.571 [2024-11-19 09:51:20.090837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.571 [2024-11-19 09:51:20.090845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.090855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.090863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.090873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.090881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.090891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:33.572 [2024-11-19 09:51:20.090899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.090908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.090916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.090927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.090934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.090944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.090953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.090963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.090970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.090980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.090987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.090997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091307] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091402] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.572 [2024-11-19 09:51:20.091541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.572 [2024-11-19 09:51:20.091550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 
09:51:20.091602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.573 [2024-11-19 09:51:20.091823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.091832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d06190 is same with the state(6) to be set 00:33:33.573 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.573 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:33.573 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.573 [2024-11-19 09:51:20.093150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:33.573 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:33.573 task offset: 81920 on job bdev=Nvme0n1 fails 00:33:33.573 00:33:33.573 Latency(us) 00:33:33.573 [2024-11-19T08:51:20.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.573 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:33.573 Job: Nvme0n1 ended in about 0.47 seconds with error 00:33:33.573 Verification LBA range: start 0x0 length 0x400 00:33:33.573 Nvme0n1 : 0.47 1359.37 84.96 135.94 0.00 41632.21 5789.01 
37573.97 00:33:33.573 [2024-11-19T08:51:20.321Z] =================================================================================================================== 00:33:33.573 [2024-11-19T08:51:20.321Z] Total : 1359.37 84.96 135.94 0.00 41632.21 5789.01 37573.97 00:33:33.573 [2024-11-19 09:51:20.095407] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:33.573 [2024-11-19 09:51:20.095444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aed000 (9): Bad file descriptor 00:33:33.573 [2024-11-19 09:51:20.096920] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:33:33.573 [2024-11-19 09:51:20.097008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:33.573 [2024-11-19 09:51:20.097034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.573 [2024-11-19 09:51:20.097048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:33:33.573 [2024-11-19 09:51:20.097057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:33:33.573 [2024-11-19 09:51:20.097064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.573 [2024-11-19 09:51:20.097073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1aed000 00:33:33.573 [2024-11-19 09:51:20.097097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aed000 (9): Bad file descriptor 00:33:33.573 [2024-11-19 09:51:20.097112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in 
error state 00:33:33.573 [2024-11-19 09:51:20.097120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:33.573 [2024-11-19 09:51:20.097130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:33.573 [2024-11-19 09:51:20.097140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:33.573 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.573 09:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 557569 00:33:34.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (557569) - No such process 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@560 -- # local subsystem config 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:34.517 { 00:33:34.517 "params": { 00:33:34.517 "name": "Nvme$subsystem", 00:33:34.517 "trtype": "$TEST_TRANSPORT", 00:33:34.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.517 "adrfam": "ipv4", 00:33:34.517 "trsvcid": "$NVMF_PORT", 00:33:34.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.517 "hdgst": ${hdgst:-false}, 00:33:34.517 "ddgst": ${ddgst:-false} 00:33:34.517 }, 00:33:34.517 "method": "bdev_nvme_attach_controller" 00:33:34.517 } 00:33:34.517 EOF 00:33:34.517 )") 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:34.517 09:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:34.517 "params": { 00:33:34.518 "name": "Nvme0", 00:33:34.518 "trtype": "tcp", 00:33:34.518 "traddr": "10.0.0.2", 00:33:34.518 "adrfam": "ipv4", 00:33:34.518 "trsvcid": "4420", 00:33:34.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.518 "hdgst": false, 00:33:34.518 "ddgst": false 00:33:34.518 }, 00:33:34.518 "method": "bdev_nvme_attach_controller" 00:33:34.518 }' 00:33:34.518 [2024-11-19 09:51:21.168848] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:33:34.518 [2024-11-19 09:51:21.168923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557922 ] 00:33:34.518 [2024-11-19 09:51:21.262079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.779 [2024-11-19 09:51:21.313179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.041 Running I/O for 1 seconds... 00:33:35.984 1408.00 IOPS, 88.00 MiB/s 00:33:35.984 Latency(us) 00:33:35.984 [2024-11-19T08:51:22.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.984 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:35.984 Verification LBA range: start 0x0 length 0x400 00:33:35.984 Nvme0n1 : 1.02 1449.66 90.60 0.00 0.00 43401.44 7536.64 38884.69 00:33:35.984 [2024-11-19T08:51:22.732Z] =================================================================================================================== 00:33:35.984 [2024-11-19T08:51:22.732Z] Total : 1449.66 90.60 0.00 0.00 43401.44 7536.64 38884.69 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.245 rmmod nvme_tcp 00:33:36.245 rmmod nvme_fabrics 00:33:36.245 rmmod nvme_keyring 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 557343 ']' 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 557343 00:33:36.245 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 557343 ']' 00:33:36.246 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 557343 00:33:36.246 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:33:36.246 09:51:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.246 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 557343 00:33:36.246 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:36.246 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:36.246 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 557343' 00:33:36.246 killing process with pid 557343 00:33:36.246 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 557343 00:33:36.246 09:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 557343 00:33:36.246 [2024-11-19 09:51:22.983537] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.507 09:51:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.507 09:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.425 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.425 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:38.425 00:33:38.425 real 0m14.763s 00:33:38.425 user 0m19.894s 00:33:38.425 sys 0m7.361s 00:33:38.425 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.425 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:38.425 ************************************ 00:33:38.425 END TEST nvmf_host_management 00:33:38.425 ************************************ 00:33:38.425 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:38.425 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:38.425 
09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.425 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:38.688 ************************************ 00:33:38.688 START TEST nvmf_lvol 00:33:38.688 ************************************ 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:38.688 * Looking for test storage... 00:33:38.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.688 09:51:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.688 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:38.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.688 --rc genhtml_branch_coverage=1 00:33:38.688 --rc 
genhtml_function_coverage=1 00:33:38.688 --rc genhtml_legend=1 00:33:38.688 --rc geninfo_all_blocks=1 00:33:38.688 --rc geninfo_unexecuted_blocks=1 00:33:38.688 00:33:38.688 ' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:38.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.689 --rc genhtml_branch_coverage=1 00:33:38.689 --rc genhtml_function_coverage=1 00:33:38.689 --rc genhtml_legend=1 00:33:38.689 --rc geninfo_all_blocks=1 00:33:38.689 --rc geninfo_unexecuted_blocks=1 00:33:38.689 00:33:38.689 ' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:38.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.689 --rc genhtml_branch_coverage=1 00:33:38.689 --rc genhtml_function_coverage=1 00:33:38.689 --rc genhtml_legend=1 00:33:38.689 --rc geninfo_all_blocks=1 00:33:38.689 --rc geninfo_unexecuted_blocks=1 00:33:38.689 00:33:38.689 ' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:38.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.689 --rc genhtml_branch_coverage=1 00:33:38.689 --rc genhtml_function_coverage=1 00:33:38.689 --rc genhtml_legend=1 00:33:38.689 --rc geninfo_all_blocks=1 00:33:38.689 --rc geninfo_unexecuted_blocks=1 00:33:38.689 00:33:38.689 ' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.689 09:51:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.689 09:51:25 
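Note how `paths/export.sh` prepends the Go, protoc, and golangci directories even though they are already present, so `PATH` accumulates the same entries many times over. A small dedup helper (a sketch, not part of the SPDK tree) that keeps only the first occurrence of each entry:

```shell
# Deduplicate a PATH-style string, preserving first-seen order.
# awk splits on ':' (RS), prints each record once (ORS rejoins with ':'),
# and sed trims the trailing separator awk leaves behind.
dedup_path() {
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
```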
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.689 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:38.951 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:38.951 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.951 09:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:47.095 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:47.095 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.095 09:51:32 
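The device-ID checks above (`[[ 0x159b == \0\x\1\0\1\7 ]]`) rely on a bash detail: the right-hand side of `[[ ... == ... ]]` is a glob pattern, so escaping every character forces a literal comparison. A self-contained illustration (the function name is ours, the IDs are the ones tested in this run):

```shell
# Right-hand side of [[ == ]] is a pattern; backslash-escaping each
# character makes it match the string "0x1017" literally.
match_id() { [[ $1 == \0\x\1\0\1\7 ]] && echo mlx5-1017 || echo other; }
match_id 0x1017   # a Mellanox ID checked by nvmf/common.sh
match_id 0x159b   # the Intel E810 ID actually found in this run
```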
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:47.095 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.095 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:47.096 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:47.096 09:51:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.096 09:51:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
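The `ipts` call at `nvmf/common.sh@287` expands (at `@790`) into an `iptables` command tagged with an `SPDK_NVMF:` comment, so teardown can later remove exactly the rules this test added. A dry-run stand-in that only prints the command it would run (no root required; `ipts_dry` is our name, not SPDK's):

```shell
# Mirror the tagging pattern from the trace: append a comment match
# carrying the original rule spec under an SPDK_NVMF: prefix.
ipts_dry() { echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts_dry -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```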
00:33:47.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:33:47.096 00:33:47.096 --- 10.0.0.2 ping statistics --- 00:33:47.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.096 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:47.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:33:47.096 00:33:47.096 --- 10.0.0.1 ping statistics --- 00:33:47.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.096 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:47.096 
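The two pings above verify target and initiator connectivity across the namespace boundary. If a script needed the average rtt from that summary line, it could be pulled apart on `/` (a sketch; not a helper from the SPDK tree):

```shell
# The ping summary "rtt min/avg/max/mdev = A/B/C/D ms" splits on '/' so
# that the avg value lands in field 5.
rtt_avg() { awk -F'/' '/^rtt/ {print $5}'; }
printf 'rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms\n' | rtt_avg
```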
09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=562457 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 562457 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 562457 ']' 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:47.096 09:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:47.096 [2024-11-19 09:51:32.974283] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:33:47.096 [2024-11-19 09:51:32.975390] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:47.096 [2024-11-19 09:51:32.975438] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.096 [2024-11-19 09:51:33.075846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:47.096 [2024-11-19 09:51:33.128863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.096 [2024-11-19 09:51:33.128916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.096 [2024-11-19 09:51:33.128924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.096 [2024-11-19 09:51:33.128932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.096 [2024-11-19 09:51:33.128938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.096 [2024-11-19 09:51:33.130772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.096 [2024-11-19 09:51:33.130929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.096 [2024-11-19 09:51:33.130930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.096 [2024-11-19 09:51:33.206034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:47.096 [2024-11-19 09:51:33.206898] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:47.096 [2024-11-19 09:51:33.207550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:47.096 [2024-11-19 09:51:33.207646] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:47.096 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.096 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:47.096 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:47.096 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:47.096 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:47.096 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.096 09:51:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:47.357 [2024-11-19 09:51:33.995817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.357 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:47.618 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:47.619 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:47.880 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:47.880 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:48.141 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:48.141 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f1b630e1-026e-416d-970a-00210365817c 00:33:48.141 09:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f1b630e1-026e-416d-970a-00210365817c lvol 20 00:33:48.401 09:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b7884f3c-db35-453e-84a2-e0b196f4f43d 00:33:48.401 09:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:48.663 09:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b7884f3c-db35-453e-84a2-e0b196f4f43d 00:33:48.924 09:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:48.924 [2024-11-19 09:51:35.595710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.924 09:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:49.185 
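Condensed, the RPC sequence `nvmf_lvol.sh` has driven so far builds two malloc bdevs, stripes them into a RAID-0, layers an lvolstore and a 20 MiB lvol on top, and exports the lvol over NVMe/TCP. A dry-run view (the `rpc` function is a stand-in that echoes instead of calling `scripts/rpc.py`; the UUIDs are the ones captured in this run's trace):

```shell
# Dry-run stand-in: print each RPC instead of invoking scripts/rpc.py.
rpc() { echo "rpc.py $*"; }
rpc bdev_malloc_create 64 512                       # -> Malloc0
rpc bdev_malloc_create 64 512                       # -> Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc bdev_lvol_create_lvstore raid0 lvs              # -> lvs UUID
rpc bdev_lvol_create -u f1b630e1-026e-416d-970a-00210365817c lvol 20
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b7884f3c-db35-453e-84a2-e0b196f4f43d
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```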
09:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=562964 00:33:49.185 09:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:49.185 09:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:50.130 09:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b7884f3c-db35-453e-84a2-e0b196f4f43d MY_SNAPSHOT 00:33:50.391 09:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=30cede7c-715a-4b8f-b1a8-a9094f4bc3b5 00:33:50.391 09:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b7884f3c-db35-453e-84a2-e0b196f4f43d 30 00:33:50.652 09:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 30cede7c-715a-4b8f-b1a8-a9094f4bc3b5 MY_CLONE 00:33:50.914 09:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d9f514cb-adbb-467c-a28b-bd8605f5272e 00:33:50.914 09:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d9f514cb-adbb-467c-a28b-bd8605f5272e 00:33:51.486 09:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 562964 00:33:59.631 Initializing NVMe Controllers 00:33:59.631 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:59.631 
Controller IO queue size 128, less than required. 00:33:59.631 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:59.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:59.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:59.631 Initialization complete. Launching workers. 00:33:59.631 ======================================================== 00:33:59.631 Latency(us) 00:33:59.631 Device Information : IOPS MiB/s Average min max 00:33:59.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15274.00 59.66 8382.40 1904.69 52120.63 00:33:59.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16842.60 65.79 7600.42 273.23 79609.06 00:33:59.632 ======================================================== 00:33:59.632 Total : 32116.60 125.46 7972.31 273.23 79609.06 00:33:59.632 00:33:59.632 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:59.892 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b7884f3c-db35-453e-84a2-e0b196f4f43d 00:33:59.892 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1b630e1-026e-416d-970a-00210365817c 00:34:00.154 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:34:00.154 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:00.154 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
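The Total row in the latency table above can be sanity-checked: total IOPS is the sum of the two per-core rows, and the total average latency is their IOPS-weighted mean (values copied from the table):

```shell
# Recompute the Total row of the spdk_nvme_perf summary.
awk 'BEGIN {
  iops3 = 15274.00;  lat3 = 8382.40   # NSID 1 from core 3
  iops4 = 16842.60;  lat4 = 7600.42   # NSID 1 from core 4
  printf "%.2f %.2f\n", iops3 + iops4, (iops3*lat3 + iops4*lat4)/(iops3 + iops4)
}'
```

Both figures match the table's Total row (32116.60 IOPS, 7972.31 us average).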
nvmftestfini 00:34:00.154 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:00.154 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:34:00.154 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:00.154 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:34:00.154 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:00.155 rmmod nvme_tcp 00:34:00.155 rmmod nvme_fabrics 00:34:00.155 rmmod nvme_keyring 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 562457 ']' 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 562457 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 562457 ']' 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 562457 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 562457 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 562457' 00:34:00.155 killing process with pid 562457 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 562457 00:34:00.155 09:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 562457 00:34:00.416 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:00.416 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:00.416 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:00.417 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:00.417 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:00.417 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:00.417 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:00.417 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:00.417 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:00.417 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.417 09:51:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.417 09:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:02.963 00:34:02.963 real 0m23.916s 00:34:02.963 user 0m56.024s 00:34:02.963 sys 0m10.862s 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:02.963 ************************************ 00:34:02.963 END TEST nvmf_lvol 00:34:02.963 ************************************ 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:02.963 ************************************ 00:34:02.963 START TEST nvmf_lvs_grow 00:34:02.963 ************************************ 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:02.963 * Looking for test storage... 
00:34:02.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:02.963 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:02.964 09:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:02.964 09:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:02.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.964 --rc genhtml_branch_coverage=1 00:34:02.964 --rc genhtml_function_coverage=1 00:34:02.964 --rc genhtml_legend=1 00:34:02.964 --rc geninfo_all_blocks=1 00:34:02.964 --rc geninfo_unexecuted_blocks=1 00:34:02.964 00:34:02.964 ' 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:02.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.964 --rc genhtml_branch_coverage=1 00:34:02.964 --rc genhtml_function_coverage=1 00:34:02.964 --rc genhtml_legend=1 00:34:02.964 --rc geninfo_all_blocks=1 00:34:02.964 --rc geninfo_unexecuted_blocks=1 00:34:02.964 00:34:02.964 ' 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:02.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.964 --rc genhtml_branch_coverage=1 00:34:02.964 --rc genhtml_function_coverage=1 00:34:02.964 --rc genhtml_legend=1 00:34:02.964 --rc geninfo_all_blocks=1 00:34:02.964 --rc geninfo_unexecuted_blocks=1 00:34:02.964 00:34:02.964 ' 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:02.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.964 --rc genhtml_branch_coverage=1 00:34:02.964 --rc genhtml_function_coverage=1 00:34:02.964 --rc genhtml_legend=1 00:34:02.964 --rc geninfo_all_blocks=1 00:34:02.964 --rc 
geninfo_unexecuted_blocks=1 00:34:02.964 00:34:02.964 ' 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:02.964 09:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.964 09:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:02.964 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:02.965 09:51:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:34:02.965 09:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:11.111 
09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.111 09:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.111 09:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:11.111 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:11.111 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.111 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:11.112 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.112 09:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:11.112 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.112 
09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:34:11.112 00:34:11.112 --- 10.0.0.2 ping statistics --- 00:34:11.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.112 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:11.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:34:11.112 00:34:11.112 --- 10.0.0.1 ping statistics --- 00:34:11.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.112 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:11.112 09:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=569289 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 569289 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 569289 ']' 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.112 09:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:11.112 [2024-11-19 09:51:56.924652] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:11.112 [2024-11-19 09:51:56.926479] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:34:11.112 [2024-11-19 09:51:56.926559] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.112 [2024-11-19 09:51:57.028403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.112 [2024-11-19 09:51:57.079248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.112 [2024-11-19 09:51:57.079301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.112 [2024-11-19 09:51:57.079310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.112 [2024-11-19 09:51:57.079318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.112 [2024-11-19 09:51:57.079324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.112 [2024-11-19 09:51:57.080092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.112 [2024-11-19 09:51:57.155931] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:11.112 [2024-11-19 09:51:57.156232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:11.112 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.112 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:34:11.112 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:11.112 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.112 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:11.112 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.112 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:11.374 [2024-11-19 09:51:57.944963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.374 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:11.374 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:11.374 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.374 09:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:11.374 ************************************ 00:34:11.374 START TEST lvs_grow_clean 00:34:11.374 ************************************ 00:34:11.374 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:34:11.374 09:51:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:11.374 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:11.374 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:11.374 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:11.374 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:11.374 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:11.374 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:11.374 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:11.374 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:11.637 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:11.637 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:11.898 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0bccc965-41f2-485f-b44e-5def01318ddc 00:34:11.898 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:11.898 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:11.898 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:11.898 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:11.898 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0bccc965-41f2-485f-b44e-5def01318ddc lvol 150 00:34:12.159 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc 00:34:12.159 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:12.159 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:12.420 [2024-11-19 09:51:58.972639] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:12.420 [2024-11-19 09:51:58.972798] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:12.420 true 00:34:12.420 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:12.420 09:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:12.681 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:12.681 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:12.681 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc 00:34:12.942 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.942 [2024-11-19 09:51:59.685283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=569704 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 569704 /var/tmp/bdevperf.sock 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 569704 ']' 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:13.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:13.203 09:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:13.203 [2024-11-19 09:51:59.924253] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:13.203 [2024-11-19 09:51:59.924321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569704 ] 00:34:13.464 [2024-11-19 09:51:59.986786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.464 [2024-11-19 09:52:00.038166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.464 09:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:13.464 09:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:34:13.464 09:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:13.725 Nvme0n1 00:34:13.725 09:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:13.986 [ 00:34:13.986 { 00:34:13.986 "name": "Nvme0n1", 00:34:13.986 "aliases": [ 00:34:13.986 "4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc" 00:34:13.986 ], 00:34:13.986 "product_name": "NVMe disk", 00:34:13.986 
"block_size": 4096, 00:34:13.986 "num_blocks": 38912, 00:34:13.986 "uuid": "4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc", 00:34:13.986 "numa_id": 0, 00:34:13.986 "assigned_rate_limits": { 00:34:13.986 "rw_ios_per_sec": 0, 00:34:13.986 "rw_mbytes_per_sec": 0, 00:34:13.986 "r_mbytes_per_sec": 0, 00:34:13.986 "w_mbytes_per_sec": 0 00:34:13.986 }, 00:34:13.986 "claimed": false, 00:34:13.987 "zoned": false, 00:34:13.987 "supported_io_types": { 00:34:13.987 "read": true, 00:34:13.987 "write": true, 00:34:13.987 "unmap": true, 00:34:13.987 "flush": true, 00:34:13.987 "reset": true, 00:34:13.987 "nvme_admin": true, 00:34:13.987 "nvme_io": true, 00:34:13.987 "nvme_io_md": false, 00:34:13.987 "write_zeroes": true, 00:34:13.987 "zcopy": false, 00:34:13.987 "get_zone_info": false, 00:34:13.987 "zone_management": false, 00:34:13.987 "zone_append": false, 00:34:13.987 "compare": true, 00:34:13.987 "compare_and_write": true, 00:34:13.987 "abort": true, 00:34:13.987 "seek_hole": false, 00:34:13.987 "seek_data": false, 00:34:13.987 "copy": true, 00:34:13.987 "nvme_iov_md": false 00:34:13.987 }, 00:34:13.987 "memory_domains": [ 00:34:13.987 { 00:34:13.987 "dma_device_id": "system", 00:34:13.987 "dma_device_type": 1 00:34:13.987 } 00:34:13.987 ], 00:34:13.987 "driver_specific": { 00:34:13.987 "nvme": [ 00:34:13.987 { 00:34:13.987 "trid": { 00:34:13.987 "trtype": "TCP", 00:34:13.987 "adrfam": "IPv4", 00:34:13.987 "traddr": "10.0.0.2", 00:34:13.987 "trsvcid": "4420", 00:34:13.987 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:13.987 }, 00:34:13.987 "ctrlr_data": { 00:34:13.987 "cntlid": 1, 00:34:13.987 "vendor_id": "0x8086", 00:34:13.987 "model_number": "SPDK bdev Controller", 00:34:13.987 "serial_number": "SPDK0", 00:34:13.987 "firmware_revision": "25.01", 00:34:13.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:13.987 "oacs": { 00:34:13.987 "security": 0, 00:34:13.987 "format": 0, 00:34:13.987 "firmware": 0, 00:34:13.987 "ns_manage": 0 00:34:13.987 }, 00:34:13.987 "multi_ctrlr": true, 
00:34:13.987 "ana_reporting": false 00:34:13.987 }, 00:34:13.987 "vs": { 00:34:13.987 "nvme_version": "1.3" 00:34:13.987 }, 00:34:13.987 "ns_data": { 00:34:13.987 "id": 1, 00:34:13.987 "can_share": true 00:34:13.987 } 00:34:13.987 } 00:34:13.987 ], 00:34:13.987 "mp_policy": "active_passive" 00:34:13.987 } 00:34:13.987 } 00:34:13.987 ] 00:34:13.987 09:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=570013 00:34:13.987 09:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:13.987 09:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:13.987 Running I/O for 10 seconds... 00:34:15.373 Latency(us) 00:34:15.373 [2024-11-19T08:52:02.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:15.373 Nvme0n1 : 1.00 16901.00 66.02 0.00 0.00 0.00 0.00 0.00 00:34:15.373 [2024-11-19T08:52:02.121Z] =================================================================================================================== 00:34:15.373 [2024-11-19T08:52:02.121Z] Total : 16901.00 66.02 0.00 0.00 0.00 0.00 0.00 00:34:15.373 00:34:15.946 09:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:15.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:15.946 Nvme0n1 : 2.00 17086.50 66.74 0.00 0.00 0.00 0.00 0.00 00:34:15.946 [2024-11-19T08:52:02.694Z] 
=================================================================================================================== 00:34:15.946 [2024-11-19T08:52:02.694Z] Total : 17086.50 66.74 0.00 0.00 0.00 0.00 0.00 00:34:15.946 00:34:16.207 true 00:34:16.207 09:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:16.207 09:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:16.467 09:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:16.467 09:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:16.467 09:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 570013 00:34:17.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:17.038 Nvme0n1 : 3.00 17360.00 67.81 0.00 0.00 0.00 0.00 0.00 00:34:17.038 [2024-11-19T08:52:03.786Z] =================================================================================================================== 00:34:17.038 [2024-11-19T08:52:03.786Z] Total : 17360.00 67.81 0.00 0.00 0.00 0.00 0.00 00:34:17.038 00:34:17.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:17.980 Nvme0n1 : 4.00 17560.25 68.59 0.00 0.00 0.00 0.00 0.00 00:34:17.980 [2024-11-19T08:52:04.728Z] =================================================================================================================== 00:34:17.980 [2024-11-19T08:52:04.728Z] Total : 17560.25 68.59 0.00 0.00 0.00 0.00 0.00 00:34:17.980 00:34:19.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:34:19.365 Nvme0n1 : 5.00 19153.60 74.82 0.00 0.00 0.00 0.00 0.00 00:34:19.365 [2024-11-19T08:52:06.113Z] =================================================================================================================== 00:34:19.365 [2024-11-19T08:52:06.113Z] Total : 19153.60 74.82 0.00 0.00 0.00 0.00 0.00 00:34:19.365 00:34:20.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:20.307 Nvme0n1 : 6.00 20215.83 78.97 0.00 0.00 0.00 0.00 0.00 00:34:20.307 [2024-11-19T08:52:07.055Z] =================================================================================================================== 00:34:20.307 [2024-11-19T08:52:07.055Z] Total : 20215.83 78.97 0.00 0.00 0.00 0.00 0.00 00:34:20.307 00:34:21.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:21.248 Nvme0n1 : 7.00 20974.57 81.93 0.00 0.00 0.00 0.00 0.00 00:34:21.248 [2024-11-19T08:52:07.996Z] =================================================================================================================== 00:34:21.248 [2024-11-19T08:52:07.996Z] Total : 20974.57 81.93 0.00 0.00 0.00 0.00 0.00 00:34:21.248 00:34:22.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:22.191 Nvme0n1 : 8.00 21543.62 84.15 0.00 0.00 0.00 0.00 0.00 00:34:22.191 [2024-11-19T08:52:08.939Z] =================================================================================================================== 00:34:22.191 [2024-11-19T08:52:08.939Z] Total : 21543.62 84.15 0.00 0.00 0.00 0.00 0.00 00:34:22.191 00:34:23.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:23.133 Nvme0n1 : 9.00 21986.22 85.88 0.00 0.00 0.00 0.00 0.00 00:34:23.133 [2024-11-19T08:52:09.881Z] =================================================================================================================== 00:34:23.133 [2024-11-19T08:52:09.881Z] Total : 21986.22 85.88 0.00 0.00 0.00 0.00 0.00 00:34:23.133 
00:34:24.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:24.075 Nvme0n1 : 10.00 22340.30 87.27 0.00 0.00 0.00 0.00 0.00 00:34:24.075 [2024-11-19T08:52:10.823Z] =================================================================================================================== 00:34:24.075 [2024-11-19T08:52:10.823Z] Total : 22340.30 87.27 0.00 0.00 0.00 0.00 0.00 00:34:24.075 00:34:24.075 00:34:24.075 Latency(us) 00:34:24.075 [2024-11-19T08:52:10.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:24.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:24.075 Nvme0n1 : 10.00 22343.03 87.28 0.00 0.00 5726.24 3440.64 28617.39 00:34:24.075 [2024-11-19T08:52:10.823Z] =================================================================================================================== 00:34:24.075 [2024-11-19T08:52:10.823Z] Total : 22343.03 87.28 0.00 0.00 5726.24 3440.64 28617.39 00:34:24.075 { 00:34:24.075 "results": [ 00:34:24.075 { 00:34:24.075 "job": "Nvme0n1", 00:34:24.075 "core_mask": "0x2", 00:34:24.075 "workload": "randwrite", 00:34:24.075 "status": "finished", 00:34:24.075 "queue_depth": 128, 00:34:24.075 "io_size": 4096, 00:34:24.075 "runtime": 10.004508, 00:34:24.075 "iops": 22343.027763084403, 00:34:24.075 "mibps": 87.27745219954845, 00:34:24.075 "io_failed": 0, 00:34:24.075 "io_timeout": 0, 00:34:24.075 "avg_latency_us": 5726.240867292084, 00:34:24.075 "min_latency_us": 3440.64, 00:34:24.075 "max_latency_us": 28617.386666666665 00:34:24.075 } 00:34:24.075 ], 00:34:24.075 "core_count": 1 00:34:24.075 } 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 569704 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 569704 ']' 00:34:24.075 09:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 569704 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 569704 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569704' 00:34:24.075 killing process with pid 569704 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 569704 00:34:24.075 Received shutdown signal, test time was about 10.000000 seconds 00:34:24.075 00:34:24.075 Latency(us) 00:34:24.075 [2024-11-19T08:52:10.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:24.075 [2024-11-19T08:52:10.823Z] =================================================================================================================== 00:34:24.075 [2024-11-19T08:52:10.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:24.075 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 569704 00:34:24.337 09:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:24.337 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:24.598 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:24.598 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:24.860 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:24.860 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:24.860 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:25.120 [2024-11-19 09:52:11.608697] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:25.120 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:25.120 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:25.121 request: 00:34:25.121 { 00:34:25.121 "uuid": "0bccc965-41f2-485f-b44e-5def01318ddc", 00:34:25.121 "method": 
"bdev_lvol_get_lvstores", 00:34:25.121 "req_id": 1 00:34:25.121 } 00:34:25.121 Got JSON-RPC error response 00:34:25.121 response: 00:34:25.121 { 00:34:25.121 "code": -19, 00:34:25.121 "message": "No such device" 00:34:25.121 } 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:25.121 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:25.381 aio_bdev 00:34:25.381 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc 00:34:25.381 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc 00:34:25.381 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:25.381 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:34:25.381 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:25.381 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:25.381 09:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:25.642 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc -t 2000 00:34:25.642 [ 00:34:25.642 { 00:34:25.642 "name": "4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc", 00:34:25.642 "aliases": [ 00:34:25.642 "lvs/lvol" 00:34:25.642 ], 00:34:25.642 "product_name": "Logical Volume", 00:34:25.642 "block_size": 4096, 00:34:25.642 "num_blocks": 38912, 00:34:25.642 "uuid": "4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc", 00:34:25.642 "assigned_rate_limits": { 00:34:25.642 "rw_ios_per_sec": 0, 00:34:25.642 "rw_mbytes_per_sec": 0, 00:34:25.642 "r_mbytes_per_sec": 0, 00:34:25.642 "w_mbytes_per_sec": 0 00:34:25.642 }, 00:34:25.642 "claimed": false, 00:34:25.642 "zoned": false, 00:34:25.642 "supported_io_types": { 00:34:25.642 "read": true, 00:34:25.642 "write": true, 00:34:25.642 "unmap": true, 00:34:25.642 "flush": false, 00:34:25.642 "reset": true, 00:34:25.642 "nvme_admin": false, 00:34:25.642 "nvme_io": false, 00:34:25.642 "nvme_io_md": false, 00:34:25.642 "write_zeroes": true, 00:34:25.642 "zcopy": false, 00:34:25.642 "get_zone_info": false, 00:34:25.642 "zone_management": false, 00:34:25.642 "zone_append": false, 00:34:25.642 "compare": false, 00:34:25.642 "compare_and_write": false, 00:34:25.642 "abort": false, 00:34:25.642 "seek_hole": true, 00:34:25.642 "seek_data": true, 00:34:25.642 "copy": false, 00:34:25.642 "nvme_iov_md": false 00:34:25.642 }, 00:34:25.642 "driver_specific": { 00:34:25.642 "lvol": { 00:34:25.642 "lvol_store_uuid": "0bccc965-41f2-485f-b44e-5def01318ddc", 00:34:25.642 "base_bdev": "aio_bdev", 00:34:25.642 
"thin_provision": false, 00:34:25.642 "num_allocated_clusters": 38, 00:34:25.642 "snapshot": false, 00:34:25.642 "clone": false, 00:34:25.642 "esnap_clone": false 00:34:25.642 } 00:34:25.642 } 00:34:25.642 } 00:34:25.642 ] 00:34:25.642 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:34:25.642 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:25.642 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:25.904 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:25.904 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0bccc965-41f2-485f-b44e-5def01318ddc 00:34:25.904 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:26.166 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:26.166 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4b4dcc0e-1b57-4a1b-bcdc-e1c2520122bc 00:34:26.166 09:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0bccc965-41f2-485f-b44e-5def01318ddc 
00:34:26.428 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:26.689 00:34:26.689 real 0m15.245s 00:34:26.689 user 0m14.806s 00:34:26.689 sys 0m1.431s 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:26.689 ************************************ 00:34:26.689 END TEST lvs_grow_clean 00:34:26.689 ************************************ 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:26.689 ************************************ 00:34:26.689 START TEST lvs_grow_dirty 00:34:26.689 ************************************ 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:26.689 09:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:26.689 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:26.950 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:26.950 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:27.210 09:52:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:27.211 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:27.211 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:27.211 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:27.211 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:27.211 09:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 lvol 150 00:34:27.471 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a86b31d8-6f24-45ba-b951-edca3dfa26e3 00:34:27.471 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:27.471 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:27.471 [2024-11-19 09:52:14.212623] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:27.471 [2024-11-19 
09:52:14.212768] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:27.732 true 00:34:27.732 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:27.732 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:27.732 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:27.732 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:27.993 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a86b31d8-6f24-45ba-b951-edca3dfa26e3 00:34:27.993 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:28.254 [2024-11-19 09:52:14.849092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.254 09:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:28.515 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=572750 00:34:28.515 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:28.515 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:28.515 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 572750 /var/tmp/bdevperf.sock 00:34:28.515 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 572750 ']' 00:34:28.515 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:28.515 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.515 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:28.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:28.515 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.516 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:28.516 [2024-11-19 09:52:15.081738] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:34:28.516 [2024-11-19 09:52:15.081791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572750 ] 00:34:28.516 [2024-11-19 09:52:15.164290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.516 [2024-11-19 09:52:15.194043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.457 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:29.457 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:29.457 09:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:29.457 Nvme0n1 00:34:29.457 09:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:29.718 [ 00:34:29.718 { 00:34:29.718 "name": "Nvme0n1", 00:34:29.718 "aliases": [ 00:34:29.718 "a86b31d8-6f24-45ba-b951-edca3dfa26e3" 00:34:29.718 ], 00:34:29.718 "product_name": "NVMe disk", 00:34:29.718 "block_size": 4096, 00:34:29.718 "num_blocks": 38912, 00:34:29.718 "uuid": "a86b31d8-6f24-45ba-b951-edca3dfa26e3", 00:34:29.718 "numa_id": 0, 00:34:29.718 "assigned_rate_limits": { 00:34:29.718 "rw_ios_per_sec": 0, 00:34:29.718 "rw_mbytes_per_sec": 0, 00:34:29.718 "r_mbytes_per_sec": 0, 00:34:29.718 "w_mbytes_per_sec": 0 00:34:29.718 }, 00:34:29.718 "claimed": false, 00:34:29.718 "zoned": false, 
00:34:29.718 "supported_io_types": { 00:34:29.718 "read": true, 00:34:29.718 "write": true, 00:34:29.718 "unmap": true, 00:34:29.718 "flush": true, 00:34:29.718 "reset": true, 00:34:29.718 "nvme_admin": true, 00:34:29.718 "nvme_io": true, 00:34:29.718 "nvme_io_md": false, 00:34:29.718 "write_zeroes": true, 00:34:29.718 "zcopy": false, 00:34:29.718 "get_zone_info": false, 00:34:29.718 "zone_management": false, 00:34:29.718 "zone_append": false, 00:34:29.718 "compare": true, 00:34:29.718 "compare_and_write": true, 00:34:29.718 "abort": true, 00:34:29.718 "seek_hole": false, 00:34:29.718 "seek_data": false, 00:34:29.718 "copy": true, 00:34:29.718 "nvme_iov_md": false 00:34:29.718 }, 00:34:29.718 "memory_domains": [ 00:34:29.718 { 00:34:29.718 "dma_device_id": "system", 00:34:29.718 "dma_device_type": 1 00:34:29.718 } 00:34:29.718 ], 00:34:29.718 "driver_specific": { 00:34:29.718 "nvme": [ 00:34:29.718 { 00:34:29.718 "trid": { 00:34:29.718 "trtype": "TCP", 00:34:29.718 "adrfam": "IPv4", 00:34:29.718 "traddr": "10.0.0.2", 00:34:29.718 "trsvcid": "4420", 00:34:29.718 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:29.718 }, 00:34:29.718 "ctrlr_data": { 00:34:29.718 "cntlid": 1, 00:34:29.718 "vendor_id": "0x8086", 00:34:29.718 "model_number": "SPDK bdev Controller", 00:34:29.718 "serial_number": "SPDK0", 00:34:29.718 "firmware_revision": "25.01", 00:34:29.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.718 "oacs": { 00:34:29.718 "security": 0, 00:34:29.718 "format": 0, 00:34:29.718 "firmware": 0, 00:34:29.718 "ns_manage": 0 00:34:29.718 }, 00:34:29.718 "multi_ctrlr": true, 00:34:29.719 "ana_reporting": false 00:34:29.719 }, 00:34:29.719 "vs": { 00:34:29.719 "nvme_version": "1.3" 00:34:29.719 }, 00:34:29.719 "ns_data": { 00:34:29.719 "id": 1, 00:34:29.719 "can_share": true 00:34:29.719 } 00:34:29.719 } 00:34:29.719 ], 00:34:29.719 "mp_policy": "active_passive" 00:34:29.719 } 00:34:29.719 } 00:34:29.719 ] 00:34:29.719 09:52:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=572885 00:34:29.719 09:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:29.719 09:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:29.719 Running I/O for 10 seconds... 00:34:31.103 Latency(us) 00:34:31.103 [2024-11-19T08:52:17.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:31.103 Nvme0n1 : 1.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:34:31.103 [2024-11-19T08:52:17.851Z] =================================================================================================================== 00:34:31.103 [2024-11-19T08:52:17.851Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:34:31.103 00:34:31.676 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:31.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:31.936 Nvme0n1 : 2.00 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:34:31.936 [2024-11-19T08:52:18.684Z] =================================================================================================================== 00:34:31.936 [2024-11-19T08:52:18.684Z] Total : 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:34:31.936 00:34:31.936 true 00:34:31.936 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:31.936 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:32.196 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:32.196 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:32.196 09:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 572885 00:34:32.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:32.766 Nvme0n1 : 3.00 17822.33 69.62 0.00 0.00 0.00 0.00 0.00 00:34:32.766 [2024-11-19T08:52:19.514Z] =================================================================================================================== 00:34:32.766 [2024-11-19T08:52:19.514Z] Total : 17822.33 69.62 0.00 0.00 0.00 0.00 0.00 00:34:32.766 00:34:34.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:34.153 Nvme0n1 : 4.00 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:34:34.153 [2024-11-19T08:52:20.901Z] =================================================================================================================== 00:34:34.153 [2024-11-19T08:52:20.901Z] Total : 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:34:34.153 00:34:34.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:34.726 Nvme0n1 : 5.00 19024.60 74.31 0.00 0.00 0.00 0.00 0.00 00:34:34.726 [2024-11-19T08:52:21.474Z] =================================================================================================================== 00:34:34.726 [2024-11-19T08:52:21.474Z] Total : 19024.60 74.31 0.00 0.00 0.00 0.00 0.00 00:34:34.726 00:34:36.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:34:36.111 Nvme0n1 : 6.00 20108.33 78.55 0.00 0.00 0.00 0.00 0.00 00:34:36.111 [2024-11-19T08:52:22.859Z] =================================================================================================================== 00:34:36.111 [2024-11-19T08:52:22.859Z] Total : 20108.33 78.55 0.00 0.00 0.00 0.00 0.00 00:34:36.111 00:34:37.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:37.055 Nvme0n1 : 7.00 20882.43 81.57 0.00 0.00 0.00 0.00 0.00 00:34:37.055 [2024-11-19T08:52:23.803Z] =================================================================================================================== 00:34:37.055 [2024-11-19T08:52:23.803Z] Total : 20882.43 81.57 0.00 0.00 0.00 0.00 0.00 00:34:37.055 00:34:37.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:37.996 Nvme0n1 : 8.00 21463.00 83.84 0.00 0.00 0.00 0.00 0.00 00:34:37.996 [2024-11-19T08:52:24.744Z] =================================================================================================================== 00:34:37.996 [2024-11-19T08:52:24.744Z] Total : 21463.00 83.84 0.00 0.00 0.00 0.00 0.00 00:34:37.996 00:34:38.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:38.940 Nvme0n1 : 9.00 21914.67 85.60 0.00 0.00 0.00 0.00 0.00 00:34:38.940 [2024-11-19T08:52:25.688Z] =================================================================================================================== 00:34:38.940 [2024-11-19T08:52:25.688Z] Total : 21914.67 85.60 0.00 0.00 0.00 0.00 0.00 00:34:38.940 00:34:39.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:39.882 Nvme0n1 : 10.00 22275.90 87.02 0.00 0.00 0.00 0.00 0.00 00:34:39.882 [2024-11-19T08:52:26.630Z] =================================================================================================================== 00:34:39.882 [2024-11-19T08:52:26.630Z] Total : 22275.90 87.02 0.00 0.00 0.00 0.00 0.00 00:34:39.882 00:34:39.882 
00:34:39.882 Latency(us) 00:34:39.882 [2024-11-19T08:52:26.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:39.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:39.882 Nvme0n1 : 10.00 22282.11 87.04 0.00 0.00 5741.82 4560.21 31238.83 00:34:39.882 [2024-11-19T08:52:26.630Z] =================================================================================================================== 00:34:39.882 [2024-11-19T08:52:26.630Z] Total : 22282.11 87.04 0.00 0.00 5741.82 4560.21 31238.83 00:34:39.882 { 00:34:39.882 "results": [ 00:34:39.882 { 00:34:39.882 "job": "Nvme0n1", 00:34:39.882 "core_mask": "0x2", 00:34:39.882 "workload": "randwrite", 00:34:39.882 "status": "finished", 00:34:39.882 "queue_depth": 128, 00:34:39.882 "io_size": 4096, 00:34:39.882 "runtime": 10.002956, 00:34:39.882 "iops": 22282.113407276807, 00:34:39.882 "mibps": 87.03950549717503, 00:34:39.882 "io_failed": 0, 00:34:39.882 "io_timeout": 0, 00:34:39.882 "avg_latency_us": 5741.8184375042065, 00:34:39.882 "min_latency_us": 4560.213333333333, 00:34:39.882 "max_latency_us": 31238.826666666668 00:34:39.882 } 00:34:39.882 ], 00:34:39.882 "core_count": 1 00:34:39.883 } 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 572750 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 572750 ']' 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 572750 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.883 09:52:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 572750 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 572750' 00:34:39.883 killing process with pid 572750 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 572750 00:34:39.883 Received shutdown signal, test time was about 10.000000 seconds 00:34:39.883 00:34:39.883 Latency(us) 00:34:39.883 [2024-11-19T08:52:26.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:39.883 [2024-11-19T08:52:26.631Z] =================================================================================================================== 00:34:39.883 [2024-11-19T08:52:26.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:39.883 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 572750 00:34:40.144 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:40.144 09:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:40.405 09:52:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:40.405 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 569289 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 569289 00:34:40.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 569289 Killed "${NVMF_APP[@]}" "$@" 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=574972 00:34:40.667 09:52:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 574972 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 574972 ']' 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:40.667 09:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:40.667 [2024-11-19 09:52:27.323009] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:40.668 [2024-11-19 09:52:27.324116] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:34:40.668 [2024-11-19 09:52:27.324188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.929 [2024-11-19 09:52:27.417888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.929 [2024-11-19 09:52:27.451272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.929 [2024-11-19 09:52:27.451302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.929 [2024-11-19 09:52:27.451308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.929 [2024-11-19 09:52:27.451313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.929 [2024-11-19 09:52:27.451317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.929 [2024-11-19 09:52:27.451785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.929 [2024-11-19 09:52:27.503871] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:40.929 [2024-11-19 09:52:27.504065] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:41.500 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:41.500 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:41.500 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:41.500 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:41.500 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:41.500 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.500 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:41.761 [2024-11-19 09:52:28.338247] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:41.761 [2024-11-19 09:52:28.338508] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:41.761 [2024-11-19 09:52:28.338599] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:41.761 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:41.761 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a86b31d8-6f24-45ba-b951-edca3dfa26e3 00:34:41.761 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=a86b31d8-6f24-45ba-b951-edca3dfa26e3 00:34:41.761 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:41.761 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:41.761 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:41.761 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:41.761 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:42.023 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a86b31d8-6f24-45ba-b951-edca3dfa26e3 -t 2000 00:34:42.023 [ 00:34:42.023 { 00:34:42.023 "name": "a86b31d8-6f24-45ba-b951-edca3dfa26e3", 00:34:42.023 "aliases": [ 00:34:42.023 "lvs/lvol" 00:34:42.023 ], 00:34:42.023 "product_name": "Logical Volume", 00:34:42.023 "block_size": 4096, 00:34:42.023 "num_blocks": 38912, 00:34:42.023 "uuid": "a86b31d8-6f24-45ba-b951-edca3dfa26e3", 00:34:42.023 "assigned_rate_limits": { 00:34:42.023 "rw_ios_per_sec": 0, 00:34:42.023 "rw_mbytes_per_sec": 0, 00:34:42.023 "r_mbytes_per_sec": 0, 00:34:42.023 "w_mbytes_per_sec": 0 00:34:42.023 }, 00:34:42.023 "claimed": false, 00:34:42.023 "zoned": false, 00:34:42.023 "supported_io_types": { 00:34:42.023 "read": true, 00:34:42.023 "write": true, 00:34:42.023 "unmap": true, 00:34:42.023 "flush": false, 00:34:42.023 "reset": true, 00:34:42.023 "nvme_admin": false, 00:34:42.023 "nvme_io": false, 00:34:42.023 "nvme_io_md": false, 00:34:42.023 "write_zeroes": true, 
00:34:42.023 "zcopy": false, 00:34:42.023 "get_zone_info": false, 00:34:42.023 "zone_management": false, 00:34:42.023 "zone_append": false, 00:34:42.023 "compare": false, 00:34:42.023 "compare_and_write": false, 00:34:42.023 "abort": false, 00:34:42.023 "seek_hole": true, 00:34:42.023 "seek_data": true, 00:34:42.023 "copy": false, 00:34:42.023 "nvme_iov_md": false 00:34:42.023 }, 00:34:42.023 "driver_specific": { 00:34:42.023 "lvol": { 00:34:42.023 "lvol_store_uuid": "537f48ec-21c0-4482-89c5-a1cbf58d71d9", 00:34:42.023 "base_bdev": "aio_bdev", 00:34:42.023 "thin_provision": false, 00:34:42.023 "num_allocated_clusters": 38, 00:34:42.023 "snapshot": false, 00:34:42.023 "clone": false, 00:34:42.023 "esnap_clone": false 00:34:42.023 } 00:34:42.023 } 00:34:42.023 } 00:34:42.023 ] 00:34:42.023 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:42.023 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:42.023 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:42.285 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:42.285 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:42.285 09:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:42.546 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:42.546 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:42.546 [2024-11-19 09:52:29.260360] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:42.807 request: 00:34:42.807 { 00:34:42.807 "uuid": "537f48ec-21c0-4482-89c5-a1cbf58d71d9", 00:34:42.807 "method": "bdev_lvol_get_lvstores", 00:34:42.807 "req_id": 1 00:34:42.807 } 00:34:42.807 Got JSON-RPC error response 00:34:42.807 response: 00:34:42.807 { 00:34:42.807 "code": -19, 00:34:42.807 "message": "No such device" 00:34:42.807 } 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:42.807 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:43.068 aio_bdev 00:34:43.068 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a86b31d8-6f24-45ba-b951-edca3dfa26e3 00:34:43.068 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a86b31d8-6f24-45ba-b951-edca3dfa26e3 00:34:43.068 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:43.068 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:43.068 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:43.068 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:43.068 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:43.329 09:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a86b31d8-6f24-45ba-b951-edca3dfa26e3 -t 2000 00:34:43.329 [ 00:34:43.329 { 00:34:43.329 "name": "a86b31d8-6f24-45ba-b951-edca3dfa26e3", 00:34:43.329 "aliases": [ 00:34:43.329 "lvs/lvol" 00:34:43.329 ], 00:34:43.329 "product_name": "Logical Volume", 00:34:43.329 "block_size": 4096, 00:34:43.329 "num_blocks": 38912, 00:34:43.329 "uuid": "a86b31d8-6f24-45ba-b951-edca3dfa26e3", 00:34:43.329 "assigned_rate_limits": { 00:34:43.329 "rw_ios_per_sec": 0, 00:34:43.329 "rw_mbytes_per_sec": 0, 00:34:43.329 
"r_mbytes_per_sec": 0, 00:34:43.329 "w_mbytes_per_sec": 0 00:34:43.329 }, 00:34:43.329 "claimed": false, 00:34:43.329 "zoned": false, 00:34:43.329 "supported_io_types": { 00:34:43.329 "read": true, 00:34:43.329 "write": true, 00:34:43.329 "unmap": true, 00:34:43.330 "flush": false, 00:34:43.330 "reset": true, 00:34:43.330 "nvme_admin": false, 00:34:43.330 "nvme_io": false, 00:34:43.330 "nvme_io_md": false, 00:34:43.330 "write_zeroes": true, 00:34:43.330 "zcopy": false, 00:34:43.330 "get_zone_info": false, 00:34:43.330 "zone_management": false, 00:34:43.330 "zone_append": false, 00:34:43.330 "compare": false, 00:34:43.330 "compare_and_write": false, 00:34:43.330 "abort": false, 00:34:43.330 "seek_hole": true, 00:34:43.330 "seek_data": true, 00:34:43.330 "copy": false, 00:34:43.330 "nvme_iov_md": false 00:34:43.330 }, 00:34:43.330 "driver_specific": { 00:34:43.330 "lvol": { 00:34:43.330 "lvol_store_uuid": "537f48ec-21c0-4482-89c5-a1cbf58d71d9", 00:34:43.330 "base_bdev": "aio_bdev", 00:34:43.330 "thin_provision": false, 00:34:43.330 "num_allocated_clusters": 38, 00:34:43.330 "snapshot": false, 00:34:43.330 "clone": false, 00:34:43.330 "esnap_clone": false 00:34:43.330 } 00:34:43.330 } 00:34:43.330 } 00:34:43.330 ] 00:34:43.330 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:43.330 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:43.330 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:43.590 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:43.590 09:52:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:43.591 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:43.851 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:43.851 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a86b31d8-6f24-45ba-b951-edca3dfa26e3 00:34:43.851 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 537f48ec-21c0-4482-89c5-a1cbf58d71d9 00:34:44.112 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:44.373 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:44.373 00:34:44.373 real 0m17.668s 00:34:44.373 user 0m35.508s 00:34:44.373 sys 0m3.081s 00:34:44.373 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.374 09:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:44.374 ************************************ 00:34:44.374 END TEST lvs_grow_dirty 00:34:44.374 ************************************ 
00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:44.374 nvmf_trace.0 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:44.374 09:52:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:44.374 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:44.374 rmmod nvme_tcp 00:34:44.635 rmmod nvme_fabrics 00:34:44.635 rmmod nvme_keyring 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 574972 ']' 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 574972 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 574972 ']' 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 574972 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 574972 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:44.635 09:52:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 574972' 00:34:44.635 killing process with pid 574972 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 574972 00:34:44.635 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 574972 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.896 09:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.810 09:52:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:46.810 00:34:46.810 real 0m44.299s 00:34:46.810 user 0m53.301s 00:34:46.810 sys 0m10.645s 00:34:46.810 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.810 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:46.810 ************************************ 00:34:46.810 END TEST nvmf_lvs_grow 00:34:46.810 ************************************ 00:34:46.810 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:46.810 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:46.810 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:46.810 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:47.072 ************************************ 00:34:47.072 START TEST nvmf_bdev_io_wait 00:34:47.073 ************************************ 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:47.073 * Looking for test storage... 
00:34:47.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:34:47.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:47.073 --rc genhtml_branch_coverage=1
00:34:47.073 --rc genhtml_function_coverage=1
00:34:47.073 --rc genhtml_legend=1
00:34:47.073 --rc geninfo_all_blocks=1
00:34:47.073 --rc geninfo_unexecuted_blocks=1
00:34:47.073 
00:34:47.073 '
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:34:47.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:47.073 --rc genhtml_branch_coverage=1
00:34:47.073 --rc genhtml_function_coverage=1
00:34:47.073 --rc genhtml_legend=1
00:34:47.073 --rc geninfo_all_blocks=1
00:34:47.073 --rc geninfo_unexecuted_blocks=1
00:34:47.073 
00:34:47.073 '
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:34:47.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:47.073 --rc genhtml_branch_coverage=1
00:34:47.073 --rc genhtml_function_coverage=1
00:34:47.073 --rc genhtml_legend=1
00:34:47.073 --rc geninfo_all_blocks=1
00:34:47.073 --rc geninfo_unexecuted_blocks=1
00:34:47.073 
00:34:47.073 '
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:34:47.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:47.073 --rc genhtml_branch_coverage=1
00:34:47.073 --rc genhtml_function_coverage=1
00:34:47.073 --rc genhtml_legend=1
00:34:47.073 --rc geninfo_all_blocks=1
00:34:47.073 --rc geninfo_unexecuted_blocks=1
00:34:47.073 
00:34:47.073 '
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:47.073 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:34:47.074 09:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:34:55.221 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:55.221 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:34:55.222 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:34:55.222 Found net devices under 0000:4b:00.0: cvl_0_0
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:34:55.222 Found net devices under 0000:4b:00.1: cvl_0_1
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:55.222 09:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:55.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:55.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms
00:34:55.222 
00:34:55.222 --- 10.0.0.2 ping statistics ---
00:34:55.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:55.222 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:55.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:55.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:34:55.222 
00:34:55.222 --- 10.0.0.1 ping statistics ---
00:34:55.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:55.222 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=579854
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 579854
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 579854 ']'
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:55.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:55.222 09:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.222 [2024-11-19 09:52:41.280096] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:55.222 [2024-11-19 09:52:41.281208] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:34:55.222 [2024-11-19 09:52:41.281278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:55.222 [2024-11-19 09:52:41.380723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:55.223 [2024-11-19 09:52:41.435254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:55.223 [2024-11-19 09:52:41.435306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:55.223 [2024-11-19 09:52:41.435315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:55.223 [2024-11-19 09:52:41.435322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:55.223 [2024-11-19 09:52:41.435329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:55.223 [2024-11-19 09:52:41.437676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:55.223 [2024-11-19 09:52:41.437838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:34:55.223 [2024-11-19 09:52:41.438000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:55.223 [2024-11-19 09:52:41.438000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:34:55.223 [2024-11-19 09:52:41.438354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.484 [2024-11-19 09:52:42.207647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:55.484 [2024-11-19 09:52:42.208211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:55.484 [2024-11-19 09:52:42.208213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:34:55.484 [2024-11-19 09:52:42.208398] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.484 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.484 [2024-11-19 09:52:42.218782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.746 Malloc0
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:55.746 [2024-11-19 09:52:42.291071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.746 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=580173
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=580176
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:55.747 {
00:34:55.747 "params": {
00:34:55.747 "name": "Nvme$subsystem",
00:34:55.747 "trtype": "$TEST_TRANSPORT",
00:34:55.747 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:55.747 "adrfam": "ipv4",
00:34:55.747 "trsvcid": "$NVMF_PORT",
00:34:55.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:55.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:55.747 "hdgst": ${hdgst:-false},
00:34:55.747 "ddgst": ${ddgst:-false}
00:34:55.747 },
00:34:55.747 "method": "bdev_nvme_attach_controller"
00:34:55.747 }
00:34:55.747 EOF
00:34:55.747 )")
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=580178
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:55.747 09:52:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.747 { 00:34:55.747 "params": { 00:34:55.747 "name": "Nvme$subsystem", 00:34:55.747 "trtype": "$TEST_TRANSPORT", 00:34:55.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.747 "adrfam": "ipv4", 00:34:55.747 "trsvcid": "$NVMF_PORT", 00:34:55.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.747 "hdgst": ${hdgst:-false}, 00:34:55.747 "ddgst": ${ddgst:-false} 00:34:55.747 }, 00:34:55.747 "method": "bdev_nvme_attach_controller" 00:34:55.747 } 00:34:55.747 EOF 00:34:55.747 )") 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=580182 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.747 { 00:34:55.747 "params": { 00:34:55.747 "name": 
"Nvme$subsystem", 00:34:55.747 "trtype": "$TEST_TRANSPORT", 00:34:55.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.747 "adrfam": "ipv4", 00:34:55.747 "trsvcid": "$NVMF_PORT", 00:34:55.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.747 "hdgst": ${hdgst:-false}, 00:34:55.747 "ddgst": ${ddgst:-false} 00:34:55.747 }, 00:34:55.747 "method": "bdev_nvme_attach_controller" 00:34:55.747 } 00:34:55.747 EOF 00:34:55.747 )") 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.747 { 00:34:55.747 "params": { 00:34:55.747 "name": "Nvme$subsystem", 00:34:55.747 "trtype": "$TEST_TRANSPORT", 00:34:55.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.747 "adrfam": "ipv4", 00:34:55.747 "trsvcid": "$NVMF_PORT", 00:34:55.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.747 "hdgst": ${hdgst:-false}, 00:34:55.747 "ddgst": ${ddgst:-false} 00:34:55.747 }, 00:34:55.747 "method": 
"bdev_nvme_attach_controller" 00:34:55.747 } 00:34:55.747 EOF 00:34:55.747 )") 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 580173 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:55.747 "params": { 00:34:55.747 "name": "Nvme1", 00:34:55.747 "trtype": "tcp", 00:34:55.747 "traddr": "10.0.0.2", 00:34:55.747 "adrfam": "ipv4", 00:34:55.747 "trsvcid": "4420", 00:34:55.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:55.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:55.747 "hdgst": false, 00:34:55.747 "ddgst": false 00:34:55.747 }, 00:34:55.747 "method": "bdev_nvme_attach_controller" 00:34:55.747 }' 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:55.747 "params": { 00:34:55.747 "name": "Nvme1", 00:34:55.747 "trtype": "tcp", 00:34:55.747 "traddr": "10.0.0.2", 00:34:55.747 "adrfam": "ipv4", 00:34:55.747 "trsvcid": "4420", 00:34:55.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:55.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:55.747 "hdgst": false, 00:34:55.747 "ddgst": false 00:34:55.747 }, 00:34:55.747 "method": "bdev_nvme_attach_controller" 00:34:55.747 }' 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:55.747 "params": { 00:34:55.747 "name": "Nvme1", 00:34:55.747 "trtype": "tcp", 00:34:55.747 "traddr": "10.0.0.2", 00:34:55.747 "adrfam": "ipv4", 00:34:55.747 "trsvcid": "4420", 00:34:55.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:55.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:55.747 "hdgst": false, 00:34:55.747 "ddgst": false 00:34:55.747 }, 00:34:55.747 "method": "bdev_nvme_attach_controller" 00:34:55.747 }' 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:55.747 09:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:55.747 "params": { 00:34:55.747 "name": "Nvme1", 00:34:55.747 "trtype": "tcp", 00:34:55.747 "traddr": "10.0.0.2", 00:34:55.747 "adrfam": "ipv4", 00:34:55.747 "trsvcid": "4420", 00:34:55.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:55.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:55.747 "hdgst": false, 00:34:55.747 "ddgst": false 00:34:55.747 }, 00:34:55.747 "method": "bdev_nvme_attach_controller" 
00:34:55.747 }' 00:34:55.747 [2024-11-19 09:52:42.349198] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:55.747 [2024-11-19 09:52:42.349284] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:55.747 [2024-11-19 09:52:42.353286] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:55.747 [2024-11-19 09:52:42.353349] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:55.747 [2024-11-19 09:52:42.354205] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:55.748 [2024-11-19 09:52:42.354263] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:55.748 [2024-11-19 09:52:42.362385] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:34:55.748 [2024-11-19 09:52:42.362455] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:56.011 [2024-11-19 09:52:42.569574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.011 [2024-11-19 09:52:42.612492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:56.011 [2024-11-19 09:52:42.633551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.011 [2024-11-19 09:52:42.671129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:56.011 [2024-11-19 09:52:42.704347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.011 [2024-11-19 09:52:42.742119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:56.273 [2024-11-19 09:52:42.795551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.273 [2024-11-19 09:52:42.836402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:56.273 Running I/O for 1 seconds... 00:34:56.534 Running I/O for 1 seconds... 00:34:56.534 Running I/O for 1 seconds... 00:34:56.534 Running I/O for 1 seconds... 
00:34:57.477 7590.00 IOPS, 29.65 MiB/s 00:34:57.477 Latency(us) 00:34:57.477 [2024-11-19T08:52:44.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.477 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:57.477 Nvme1n1 : 1.02 7610.31 29.73 0.00 0.00 16684.94 3112.96 22937.60 00:34:57.477 [2024-11-19T08:52:44.225Z] =================================================================================================================== 00:34:57.477 [2024-11-19T08:52:44.225Z] Total : 7610.31 29.73 0.00 0.00 16684.94 3112.96 22937.60 00:34:57.477 10920.00 IOPS, 42.66 MiB/s [2024-11-19T08:52:44.225Z] 7021.00 IOPS, 27.43 MiB/s 00:34:57.477 Latency(us) 00:34:57.477 [2024-11-19T08:52:44.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.477 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:57.477 Nvme1n1 : 1.01 7106.32 27.76 0.00 0.00 17951.87 5242.88 33204.91 00:34:57.477 [2024-11-19T08:52:44.225Z] =================================================================================================================== 00:34:57.477 [2024-11-19T08:52:44.225Z] Total : 7106.32 27.76 0.00 0.00 17951.87 5242.88 33204.91 00:34:57.477 00:34:57.477 Latency(us) 00:34:57.477 [2024-11-19T08:52:44.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.478 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:57.478 Nvme1n1 : 1.01 10983.20 42.90 0.00 0.00 11612.38 2184.53 18022.40 00:34:57.478 [2024-11-19T08:52:44.226Z] =================================================================================================================== 00:34:57.478 [2024-11-19T08:52:44.226Z] Total : 10983.20 42.90 0.00 0.00 11612.38 2184.53 18022.40 00:34:57.478 185120.00 IOPS, 723.12 MiB/s 00:34:57.478 Latency(us) 00:34:57.478 [2024-11-19T08:52:44.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:34:57.478 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:57.478 Nvme1n1 : 1.00 184752.21 721.69 0.00 0.00 689.04 302.08 1966.08 00:34:57.478 [2024-11-19T08:52:44.226Z] =================================================================================================================== 00:34:57.478 [2024-11-19T08:52:44.226Z] Total : 184752.21 721.69 0.00 0.00 689.04 302.08 1966.08 00:34:57.478 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 580176 00:34:57.478 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 580178 00:34:57.478 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 580182 00:34:57.478 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:57.478 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.478 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:57.738 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:57.739 rmmod nvme_tcp 00:34:57.739 rmmod nvme_fabrics 00:34:57.739 rmmod nvme_keyring 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 579854 ']' 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 579854 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 579854 ']' 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 579854 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 579854 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 579854' 00:34:57.739 killing process with pid 579854 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 579854 00:34:57.739 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 579854 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.000 09:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:58.000 09:52:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.915 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:59.915 00:34:59.915 real 0m13.038s 00:34:59.915 user 0m16.113s 00:34:59.915 sys 0m7.742s 00:34:59.915 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.915 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:59.915 ************************************ 00:34:59.915 END TEST nvmf_bdev_io_wait 00:34:59.915 ************************************ 00:34:59.915 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:59.915 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:59.915 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:59.915 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:00.177 ************************************ 00:35:00.177 START TEST nvmf_queue_depth 00:35:00.177 ************************************ 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:00.177 * Looking for test storage... 
00:35:00.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.177 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:00.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.178 --rc genhtml_branch_coverage=1 00:35:00.178 --rc genhtml_function_coverage=1 00:35:00.178 --rc genhtml_legend=1 00:35:00.178 --rc geninfo_all_blocks=1 00:35:00.178 --rc geninfo_unexecuted_blocks=1 00:35:00.178 00:35:00.178 ' 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:00.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.178 --rc genhtml_branch_coverage=1 00:35:00.178 --rc genhtml_function_coverage=1 00:35:00.178 --rc genhtml_legend=1 00:35:00.178 --rc geninfo_all_blocks=1 00:35:00.178 --rc geninfo_unexecuted_blocks=1 00:35:00.178 00:35:00.178 ' 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:00.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.178 --rc genhtml_branch_coverage=1 00:35:00.178 --rc genhtml_function_coverage=1 00:35:00.178 --rc genhtml_legend=1 00:35:00.178 --rc geninfo_all_blocks=1 00:35:00.178 --rc geninfo_unexecuted_blocks=1 00:35:00.178 00:35:00.178 ' 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:00.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.178 --rc genhtml_branch_coverage=1 00:35:00.178 --rc genhtml_function_coverage=1 00:35:00.178 --rc genhtml_legend=1 00:35:00.178 --rc 
geninfo_all_blocks=1 00:35:00.178 --rc geninfo_unexecuted_blocks=1 00:35:00.178 00:35:00.178 ' 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.178 09:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.178 09:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.178 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:00.441 09:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:35:00.441 09:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:35:08.582 
09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:08.582 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:08.583 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:08.583 09:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:08.583 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:08.583 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:08.583 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:08.583 09:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:08.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:08.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:35:08.583 00:35:08.583 --- 10.0.0.2 ping statistics --- 00:35:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.583 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:08.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:08.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:35:08.583 00:35:08.583 --- 10.0.0.1 ping statistics --- 00:35:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.583 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:08.583 09:52:54 
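The trace above (nvmf/common.sh@271–291) builds the test topology: the target-side port is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, port 4420 is opened in iptables, and connectivity is verified with a ping in each direction. A condensed sketch of that sequence follows; interface names (cvl_0_0, cvl_0_1) and addresses are copied from the log, and the RUN prefix is a hypothetical dry-run guard (defaulting to echo) so the steps can be printed without root privileges — drop it to execute for real.

```shell
# Condensed sketch of the netns topology the log builds. Requires root
# and the two NIC ports when run for real; RUN=echo just prints the plan.
RUN=${RUN:-echo}

$RUN ip netns add cvl_0_0_ns_spdk                        # target-side namespace
$RUN ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP (host side)
$RUN ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
$RUN ip link set cvl_0_1 up
$RUN ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$RUN ip netns exec cvl_0_0_ns_spdk ip link set lo up
$RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                                  # host -> target check
$RUN ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host check
```

Putting the target's port in a namespace lets one machine act as both initiator and target over a real physical link, which is why the log's NVMF_APP is later prefixed with `ip netns exec cvl_0_0_ns_spdk`.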
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=584570 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 584570 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:08.583 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 584570 ']' 00:35:08.584 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.584 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.584 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:08.584 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.584 09:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.584 [2024-11-19 09:52:54.438995] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:08.584 [2024-11-19 09:52:54.440126] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:35:08.584 [2024-11-19 09:52:54.440182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.584 [2024-11-19 09:52:54.543040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.584 [2024-11-19 09:52:54.593278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.584 [2024-11-19 09:52:54.593327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.584 [2024-11-19 09:52:54.593335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.584 [2024-11-19 09:52:54.593343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.584 [2024-11-19 09:52:54.593349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:08.584 [2024-11-19 09:52:54.594085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.584 [2024-11-19 09:52:54.671797] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:08.584 [2024-11-19 09:52:54.672085] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:08.584 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.584 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:08.584 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:08.584 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:08.584 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.584 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.584 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:08.584 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.584 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.584 [2024-11-19 09:52:55.318931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.846 Malloc0 00:35:08.846 09:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.846 [2024-11-19 09:52:55.399068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.846 
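The rpc_cmd calls traced above (queue_depth.sh@23–27) configure the target end to end: create the TCP transport, back it with a 64 MiB malloc bdev, and expose that bdev through a subsystem listening on 10.0.0.2:4420. A condensed sketch of the same sequence, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock (the rpc.py path will differ per checkout; RUN defaults to echo so the plan prints without a live target):

```shell
# Target-side RPC sequence from queue_depth.sh; parameters (transport opts,
# bdev size/block size, NQN, serial, listen address) are taken from the log.
RUN=${RUN:-echo}
RPC="scripts/rpc.py"    # run inside the target's netns in the real test

$RUN $RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
$RUN $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512 B blocks
$RUN $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RUN $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RUN $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `-a` flag on nvmf_create_subsystem allows any host to connect, which is why the bdevperf initiator below can attach without hostnqn-based access control.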
09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=584917 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 584917 /var/tmp/bdevperf.sock 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 584917 ']' 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:08.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.846 09:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:08.846 [2024-11-19 09:52:55.457129] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:35:08.846 [2024-11-19 09:52:55.457203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid584917 ] 00:35:08.846 [2024-11-19 09:52:55.532796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.846 [2024-11-19 09:52:55.585568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.790 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.790 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:09.790 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:09.790 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.790 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:09.790 NVMe0n1 00:35:09.790 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.790 09:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:10.052 Running I/O for 10 seconds... 
00:35:11.941 8750.00 IOPS, 34.18 MiB/s [2024-11-19T08:52:59.631Z] 8885.50 IOPS, 34.71 MiB/s [2024-11-19T08:53:01.019Z] 9306.33 IOPS, 36.35 MiB/s [2024-11-19T08:53:01.962Z] 10239.75 IOPS, 40.00 MiB/s [2024-11-19T08:53:02.905Z] 10867.60 IOPS, 42.45 MiB/s [2024-11-19T08:53:03.847Z] 11345.83 IOPS, 44.32 MiB/s [2024-11-19T08:53:04.789Z] 11705.14 IOPS, 45.72 MiB/s [2024-11-19T08:53:05.730Z] 11923.25 IOPS, 46.58 MiB/s [2024-11-19T08:53:06.672Z] 12143.22 IOPS, 47.43 MiB/s [2024-11-19T08:53:06.672Z] 12294.20 IOPS, 48.02 MiB/s 00:35:19.924 Latency(us) 00:35:19.924 [2024-11-19T08:53:06.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.924 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:19.924 Verification LBA range: start 0x0 length 0x4000 00:35:19.924 NVMe0n1 : 10.05 12332.52 48.17 0.00 0.00 82765.65 17913.17 73837.23 00:35:19.924 [2024-11-19T08:53:06.672Z] =================================================================================================================== 00:35:19.924 [2024-11-19T08:53:06.672Z] Total : 12332.52 48.17 0.00 0.00 82765.65 17913.17 73837.23 00:35:19.924 { 00:35:19.924 "results": [ 00:35:19.924 { 00:35:19.924 "job": "NVMe0n1", 00:35:19.924 "core_mask": "0x1", 00:35:19.924 "workload": "verify", 00:35:19.924 "status": "finished", 00:35:19.924 "verify_range": { 00:35:19.924 "start": 0, 00:35:19.924 "length": 16384 00:35:19.924 }, 00:35:19.924 "queue_depth": 1024, 00:35:19.924 "io_size": 4096, 00:35:19.924 "runtime": 10.051958, 00:35:19.924 "iops": 12332.522678666186, 00:35:19.924 "mibps": 48.17391671353979, 00:35:19.924 "io_failed": 0, 00:35:19.924 "io_timeout": 0, 00:35:19.924 "avg_latency_us": 82765.65464121883, 00:35:19.925 "min_latency_us": 17913.173333333332, 00:35:19.925 "max_latency_us": 73837.22666666667 00:35:19.925 } 00:35:19.925 ], 00:35:19.925 "core_count": 1 00:35:19.925 } 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 584917 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 584917 ']' 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 584917 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584917 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584917' 00:35:20.185 killing process with pid 584917 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 584917 00:35:20.185 Received shutdown signal, test time was about 10.000000 seconds 00:35:20.185 00:35:20.185 Latency(us) 00:35:20.185 [2024-11-19T08:53:06.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.185 [2024-11-19T08:53:06.933Z] =================================================================================================================== 00:35:20.185 [2024-11-19T08:53:06.933Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 584917 00:35:20.185 09:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:20.185 rmmod nvme_tcp 00:35:20.185 rmmod nvme_fabrics 00:35:20.185 rmmod nvme_keyring 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 584570 ']' 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 584570 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 584570 ']' 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 584570 00:35:20.185 09:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.185 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584570 00:35:20.446 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:20.446 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:20.446 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584570' 00:35:20.446 killing process with pid 584570 00:35:20.446 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 584570 00:35:20.446 09:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 584570 00:35:20.446 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:20.446 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:20.447 09:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:22.993 00:35:22.993 real 0m22.489s 00:35:22.993 user 0m24.797s 00:35:22.993 sys 0m7.397s 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:22.993 ************************************ 00:35:22.993 END TEST nvmf_queue_depth 00:35:22.993 ************************************ 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:22.993 ************************************ 00:35:22.993 START 
TEST nvmf_target_multipath 00:35:22.993 ************************************ 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:22.993 * Looking for test storage... 00:35:22.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:22.993 09:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:22.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.993 --rc genhtml_branch_coverage=1 00:35:22.993 --rc genhtml_function_coverage=1 00:35:22.993 --rc genhtml_legend=1 00:35:22.993 --rc geninfo_all_blocks=1 00:35:22.993 --rc geninfo_unexecuted_blocks=1 00:35:22.993 00:35:22.993 ' 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:22.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.993 --rc genhtml_branch_coverage=1 00:35:22.993 --rc genhtml_function_coverage=1 00:35:22.993 --rc genhtml_legend=1 00:35:22.993 --rc geninfo_all_blocks=1 00:35:22.993 --rc geninfo_unexecuted_blocks=1 00:35:22.993 00:35:22.993 ' 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:22.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.993 --rc genhtml_branch_coverage=1 00:35:22.993 --rc genhtml_function_coverage=1 00:35:22.993 --rc genhtml_legend=1 00:35:22.993 --rc geninfo_all_blocks=1 00:35:22.993 --rc geninfo_unexecuted_blocks=1 00:35:22.993 00:35:22.993 ' 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:22.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.993 --rc genhtml_branch_coverage=1 00:35:22.993 --rc genhtml_function_coverage=1 00:35:22.993 --rc genhtml_legend=1 00:35:22.993 --rc geninfo_all_blocks=1 00:35:22.993 --rc geninfo_unexecuted_blocks=1 00:35:22.993 00:35:22.993 ' 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:22.993 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:22.994 09:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.994 09:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:22.994 09:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:31.137 09:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:31.137 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:31.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:31.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:31.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.138 09:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:31.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:31.138 09:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:31.138 09:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:31.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:31.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:35:31.138 00:35:31.138 --- 10.0.0.2 ping statistics --- 00:35:31.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.138 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:35:31.138 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:31.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:31.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:35:31.138 00:35:31.138 --- 10.0.0.1 ping statistics --- 00:35:31.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.138 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:31.139 only one NIC for nvmf test 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:31.139 09:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:31.139 rmmod nvme_tcp 00:35:31.139 rmmod nvme_fabrics 00:35:31.139 rmmod nvme_keyring 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:31.139 09:53:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.139 09:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.525 
09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:32.525 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.526 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.526 00:35:32.526 real 0m9.631s 00:35:32.526 user 0m2.161s 00:35:32.526 sys 0m5.419s 00:35:32.526 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.526 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:32.526 ************************************ 00:35:32.526 END TEST nvmf_target_multipath 00:35:32.526 ************************************ 00:35:32.526 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:32.526 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:32.526 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.526 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:32.526 ************************************ 00:35:32.526 START TEST nvmf_zcopy 00:35:32.526 ************************************ 00:35:32.526 09:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:32.526 * Looking for test storage... 
00:35:32.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:32.526 09:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:32.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.526 --rc genhtml_branch_coverage=1 00:35:32.526 --rc genhtml_function_coverage=1 00:35:32.526 --rc genhtml_legend=1 00:35:32.526 --rc geninfo_all_blocks=1 00:35:32.526 --rc geninfo_unexecuted_blocks=1 00:35:32.526 00:35:32.526 ' 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:32.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.526 --rc genhtml_branch_coverage=1 00:35:32.526 --rc genhtml_function_coverage=1 00:35:32.526 --rc genhtml_legend=1 00:35:32.526 --rc geninfo_all_blocks=1 00:35:32.526 --rc geninfo_unexecuted_blocks=1 00:35:32.526 00:35:32.526 ' 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:32.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.526 --rc genhtml_branch_coverage=1 00:35:32.526 --rc genhtml_function_coverage=1 00:35:32.526 --rc genhtml_legend=1 00:35:32.526 --rc geninfo_all_blocks=1 00:35:32.526 --rc geninfo_unexecuted_blocks=1 00:35:32.526 00:35:32.526 ' 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:32.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.526 --rc genhtml_branch_coverage=1 00:35:32.526 --rc genhtml_function_coverage=1 00:35:32.526 --rc genhtml_legend=1 00:35:32.526 --rc geninfo_all_blocks=1 00:35:32.526 --rc geninfo_unexecuted_blocks=1 00:35:32.526 00:35:32.526 ' 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.526 09:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.526 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.527 09:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:32.527 09:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:40.675 
09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:40.675 09:53:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:40.675 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.675 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:40.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:40.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:40.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:40.676 09:53:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:40.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:40.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:35:40.676 00:35:40.676 --- 10.0.0.2 ping statistics --- 00:35:40.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.676 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:40.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:40.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:35:40.676 00:35:40.676 --- 10.0.0.1 ping statistics --- 00:35:40.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.676 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=595255 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 595255 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 595255 ']' 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:40.676 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.677 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:40.677 09:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.677 [2024-11-19 09:53:26.574276] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:40.677 [2024-11-19 09:53:26.575406] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:35:40.677 [2024-11-19 09:53:26.575456] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.677 [2024-11-19 09:53:26.674253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.677 [2024-11-19 09:53:26.709892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.677 [2024-11-19 09:53:26.709921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.677 [2024-11-19 09:53:26.709930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.677 [2024-11-19 09:53:26.709937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.677 [2024-11-19 09:53:26.709943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:40.677 [2024-11-19 09:53:26.710508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.677 [2024-11-19 09:53:26.764809] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:40.677 [2024-11-19 09:53:26.765063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:40.677 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.677 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:35:40.677 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:40.677 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:40.677 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.938 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.938 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:35:40.938 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.939 [2024-11-19 09:53:27.431272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.939 
09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.939 [2024-11-19 09:53:27.459572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.939 malloc0 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:40.939 { 00:35:40.939 "params": { 00:35:40.939 "name": "Nvme$subsystem", 00:35:40.939 "trtype": "$TEST_TRANSPORT", 00:35:40.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.939 "adrfam": "ipv4", 00:35:40.939 "trsvcid": "$NVMF_PORT", 00:35:40.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.939 "hdgst": ${hdgst:-false}, 00:35:40.939 "ddgst": ${ddgst:-false} 00:35:40.939 }, 00:35:40.939 "method": "bdev_nvme_attach_controller" 00:35:40.939 } 00:35:40.939 EOF 00:35:40.939 )") 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:40.939 09:53:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:40.939 09:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:40.939 "params": { 00:35:40.939 "name": "Nvme1", 00:35:40.939 "trtype": "tcp", 00:35:40.939 "traddr": "10.0.0.2", 00:35:40.939 "adrfam": "ipv4", 00:35:40.939 "trsvcid": "4420", 00:35:40.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.939 "hdgst": false, 00:35:40.939 "ddgst": false 00:35:40.939 }, 00:35:40.939 "method": "bdev_nvme_attach_controller" 00:35:40.939 }' 00:35:40.939 [2024-11-19 09:53:27.563976] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:35:40.939 [2024-11-19 09:53:27.564041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595380 ] 00:35:40.939 [2024-11-19 09:53:27.655472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.201 [2024-11-19 09:53:27.708890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.201 Running I/O for 10 seconds... 
00:35:43.175 6591.00 IOPS, 51.49 MiB/s [2024-11-19T08:53:31.310Z] 6613.50 IOPS, 51.67 MiB/s [2024-11-19T08:53:32.252Z] 6607.33 IOPS, 51.62 MiB/s [2024-11-19T08:53:33.195Z] 6754.75 IOPS, 52.77 MiB/s [2024-11-19T08:53:34.137Z] 7345.60 IOPS, 57.39 MiB/s [2024-11-19T08:53:35.078Z] 7733.33 IOPS, 60.42 MiB/s [2024-11-19T08:53:36.019Z] 8008.86 IOPS, 62.57 MiB/s [2024-11-19T08:53:36.962Z] 8214.12 IOPS, 64.17 MiB/s [2024-11-19T08:53:38.361Z] 8374.22 IOPS, 65.42 MiB/s [2024-11-19T08:53:38.361Z] 8502.80 IOPS, 66.43 MiB/s 00:35:51.613 Latency(us) 00:35:51.613 [2024-11-19T08:53:38.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.613 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:35:51.613 Verification LBA range: start 0x0 length 0x1000 00:35:51.613 Nvme1n1 : 10.01 8506.98 66.46 0.00 0.00 15001.36 1897.81 28398.93 00:35:51.613 [2024-11-19T08:53:38.361Z] =================================================================================================================== 00:35:51.613 [2024-11-19T08:53:38.361Z] Total : 8506.98 66.46 0.00 0.00 15001.36 1897.81 28398.93 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=597308 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:51.613 09:53:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:51.613 { 00:35:51.613 "params": { 00:35:51.613 "name": "Nvme$subsystem", 00:35:51.613 "trtype": "$TEST_TRANSPORT", 00:35:51.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.613 "adrfam": "ipv4", 00:35:51.613 "trsvcid": "$NVMF_PORT", 00:35:51.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.613 "hdgst": ${hdgst:-false}, 00:35:51.613 "ddgst": ${ddgst:-false} 00:35:51.613 }, 00:35:51.613 "method": "bdev_nvme_attach_controller" 00:35:51.613 } 00:35:51.613 EOF 00:35:51.613 )") 00:35:51.613 [2024-11-19 09:53:38.050821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.613 [2024-11-19 09:53:38.050847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:51.613 09:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:51.613 "params": { 00:35:51.613 "name": "Nvme1", 00:35:51.613 "trtype": "tcp", 00:35:51.613 "traddr": "10.0.0.2", 00:35:51.613 "adrfam": "ipv4", 00:35:51.613 "trsvcid": "4420", 00:35:51.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:51.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:51.613 "hdgst": false, 00:35:51.613 "ddgst": false 00:35:51.613 }, 00:35:51.613 "method": "bdev_nvme_attach_controller" 00:35:51.613 }' 00:35:51.613 [2024-11-19 09:53:38.062788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.613 [2024-11-19 09:53:38.062798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.613 [2024-11-19 09:53:38.074788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.613 [2024-11-19 09:53:38.074796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.613 [2024-11-19 09:53:38.086785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.613 [2024-11-19 09:53:38.086794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.613 [2024-11-19 09:53:38.094073] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:35:51.613 [2024-11-19 09:53:38.094119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597308 ] 00:35:51.613 [2024-11-19 09:53:38.098785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.613 [2024-11-19 09:53:38.098794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.613 [2024-11-19 09:53:38.110785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.613 [2024-11-19 09:53:38.110793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.613 [2024-11-19 09:53:38.122785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.613 [2024-11-19 09:53:38.122792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.613 [2024-11-19 09:53:38.134785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.613 [2024-11-19 09:53:38.134793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.613 [2024-11-19 09:53:38.146785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.613 [2024-11-19 09:53:38.146793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.158784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.158791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.170785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.170792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:35:51.614 [2024-11-19 09:53:38.175019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.614 [2024-11-19 09:53:38.182785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.182794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.194785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.194794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.204345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.614 [2024-11-19 09:53:38.206785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.206793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.218791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.218802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.230790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.230802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.242787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.242798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.254785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.254794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.266785] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.266793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.278793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.278809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.290786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.290796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.302789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.302801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.314787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.314797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.326792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.326806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 Running I/O for 5 seconds... 
00:35:51.614 [2024-11-19 09:53:38.338788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.338801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.614 [2024-11-19 09:53:38.353924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.614 [2024-11-19 09:53:38.353940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.367221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.367236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.382084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.382100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.395106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.395120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.410265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.410279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.423429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.423444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.438491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.438511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.451206] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.451220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.466641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.466656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.479677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.479692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.875 [2024-11-19 09:53:38.494336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.875 [2024-11-19 09:53:38.494351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.876 [2024-11-19 09:53:38.507711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.876 [2024-11-19 09:53:38.507726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.876 [2024-11-19 09:53:38.522102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.876 [2024-11-19 09:53:38.522118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.876 [2024-11-19 09:53:38.535037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.876 [2024-11-19 09:53:38.535052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.876 [2024-11-19 09:53:38.548205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.876 [2024-11-19 09:53:38.548220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.876 [2024-11-19 09:53:38.562167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:51.876 [2024-11-19 09:53:38.562182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.876 [2024-11-19 09:53:38.575339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.876 [2024-11-19 09:53:38.575353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.876 [2024-11-19 09:53:38.589742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.876 [2024-11-19 09:53:38.589756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.876 [2024-11-19 09:53:38.602815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.876 [2024-11-19 09:53:38.602830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:51.876 [2024-11-19 09:53:38.616120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:51.876 [2024-11-19 09:53:38.616134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.629709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.629724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.643192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.643207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.657849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.657863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.670705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 
[2024-11-19 09:53:38.670720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.684249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.684263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.697814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.697829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.711157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.711176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.726155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.726175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.738861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.738876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.751957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.751971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.766131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.766146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.778938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.778953] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.792102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.792116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.806265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.806279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.819231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.819245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.834044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.834058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.847170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.847183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.861933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.861948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.137 [2024-11-19 09:53:38.874799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.137 [2024-11-19 09:53:38.874813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:38.887498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:38.887512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:52.398 [2024-11-19 09:53:38.902005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:38.902019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:38.915186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:38.915200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:38.929749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:38.929763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:38.942679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:38.942693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:38.956402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:38.956416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:38.970517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:38.970532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:38.983531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:38.983546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:38.998410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:38.998424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.011715] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.011729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.025949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.025964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.039043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.039057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.051981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.051996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.066170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.066184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.078999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.079014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.091946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.091960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.105925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.105940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.118833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.118847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.398 [2024-11-19 09:53:39.131733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.398 [2024-11-19 09:53:39.131747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.145798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.145813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.158716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.158730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.171657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.171671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.185846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.185860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.198806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.198820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.211898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.211913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.225779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 
[2024-11-19 09:53:39.225794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.238727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.238743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.251271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.251285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.265862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.265876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.278559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.278573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.290993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.291007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.303912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.303926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.317903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.317918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.331181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.331194] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 19046.00 IOPS, 148.80 MiB/s [2024-11-19T08:53:39.411Z] [2024-11-19 09:53:39.345838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.345852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.358933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.358948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.371961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.371975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.385806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.385821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.663 [2024-11-19 09:53:39.398910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.663 [2024-11-19 09:53:39.398924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.926 [2024-11-19 09:53:39.411459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.926 [2024-11-19 09:53:39.411473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.926 [2024-11-19 09:53:39.425666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.926 [2024-11-19 09:53:39.425680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.926 [2024-11-19 09:53:39.438729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.926 [2024-11-19 09:53:39.438745] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.926 [2024-11-19 09:53:39.452043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.926 [2024-11-19 09:53:39.452060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.926 [2024-11-19 09:53:39.465646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.926 [2024-11-19 09:53:39.465660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.926 [2024-11-19 09:53:39.478662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.926 [2024-11-19 09:53:39.478677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.926 [2024-11-19 09:53:39.491350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.926 [2024-11-19 09:53:39.491364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.505576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.505590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.518700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.518715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.532098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.532112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.545900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.545915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:52.927 [2024-11-19 09:53:39.558963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.558978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.572049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.572063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.586165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.586180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.599178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.599192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.613510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.613524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.626233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.626248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.639278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.639292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.653584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:52.927 [2024-11-19 09:53:39.653598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:52.927 [2024-11-19 09:53:39.666445] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:52.927 [2024-11-19 09:53:39.666459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[identical "Requested NSID 1 already in use" / "Unable to add namespace" error pairs repeat from 00:35:53.187 through 00:35:55.280, elided]
19093.50 IOPS, 149.17 MiB/s [2024-11-19T08:53:40.458Z]
19099.67 IOPS, 149.22 MiB/s [2024-11-19T08:53:41.504Z]
add namespace 00:35:55.280 [2024-11-19 09:53:41.987387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.280 [2024-11-19 09:53:41.987403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.280 [2024-11-19 09:53:42.001455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.280 [2024-11-19 09:53:42.001470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.280 [2024-11-19 09:53:42.014623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.280 [2024-11-19 09:53:42.014638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.027276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.027292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.041816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.041831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.054786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.054801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.067606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.067621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.081719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.081734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.094524] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.094539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.107074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.107088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.121768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.121783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.134740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.134755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.147850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.147865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.162120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.162135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.175051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.175066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.187783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.187798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.202112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.202128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.214679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.214694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.227240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.227254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.241846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.241861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.255038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.255053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.268020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.268034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.543 [2024-11-19 09:53:42.281965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.543 [2024-11-19 09:53:42.281980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.295116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.295130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.310129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 
[2024-11-19 09:53:42.310144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.322881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.322897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.335801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.335816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 19117.00 IOPS, 149.35 MiB/s [2024-11-19T08:53:42.553Z] [2024-11-19 09:53:42.350318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.350334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.363497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.363512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.377992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.378007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.390764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.390779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.403781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.403796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.418410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 
[2024-11-19 09:53:42.418425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.431310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.431325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.446118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.446133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.459280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.459295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.474266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.474281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.487608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.487622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.502194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.502213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.515113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.515128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.530194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.530209] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:55.805 [2024-11-19 09:53:42.542967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:55.805 [2024-11-19 09:53:42.542982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.555869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.555885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.570257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.570272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.583396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.583410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.598049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.598064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.610927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.610942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.623841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.623857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.637933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.637948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:56.066 [2024-11-19 09:53:42.651367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.651382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.665963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.665978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.678771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.678786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.692106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.692121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.706335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.706350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.719623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.719637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.733352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.733367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.746414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.746430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.759375] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.759392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.773691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.773705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.786945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.786960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.066 [2024-11-19 09:53:42.799798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.066 [2024-11-19 09:53:42.799813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.813973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.813988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.826937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.826952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.839498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.839512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.854128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.854143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.867487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.867502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.882290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.882305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.895624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.895639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.910080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.910095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.922716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.922731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.935235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.935249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.950174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.950189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.963123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.963137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.978284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 
[2024-11-19 09:53:42.978299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:42.991595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:42.991609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:43.005534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:43.005549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:43.018784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:43.018805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:43.031767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:43.031782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:43.046205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:43.046219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.328 [2024-11-19 09:53:43.059147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.328 [2024-11-19 09:53:43.059172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.589 [2024-11-19 09:53:43.073716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.073731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.086849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.086864] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.099864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.099879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.113891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.113906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.126815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.126830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.139684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.139698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.153707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.153722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.166934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.166949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.179533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.179548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.193745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.193759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:56.590 [2024-11-19 09:53:43.206523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.206538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.219727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.219742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.233759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.233774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.246797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.246812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.259750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.259765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.273976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.273996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.287110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.287126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.302494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:56.590 [2024-11-19 09:53:43.302509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:56.590 [2024-11-19 09:53:43.315641] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.590 [2024-11-19 09:53:43.315656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.590 [2024-11-19 09:53:43.329958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.590 [2024-11-19 09:53:43.329973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.851 [2024-11-19 09:53:43.342961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.851 [2024-11-19 09:53:43.342977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.851 19119.60 IOPS, 149.37 MiB/s [2024-11-19T08:53:43.599Z]
[2024-11-19 09:53:43.354889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.851 [2024-11-19 09:53:43.354903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.851
00:35:56.851 Latency(us)
00:35:56.851 [2024-11-19T08:53:43.599Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:35:56.851 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:35:56.851 Nvme1n1 : 5.01            19121.98     149.39      0.00     0.00    6687.52    2826.24   11195.73
00:35:56.851 [2024-11-19T08:53:43.599Z] ===================================================================================================================
00:35:56.852 [2024-11-19T08:53:43.600Z] Total : 19121.98     149.39      0.00     0.00    6687.52    2826.24   11195.73
00:35:56.852 [2024-11-19 09:53:43.366789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.852 [2024-11-19 09:53:43.366804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.852 [2024-11-19 09:53:43.378801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.852 [2024-11-19 09:53:43.378816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.852 [2024-11-19 09:53:43.390791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.852 [2024-11-19 09:53:43.390805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.852 [2024-11-19 09:53:43.402793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.852 [2024-11-19 09:53:43.402805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.852 [2024-11-19 09:53:43.414788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.852 [2024-11-19 09:53:43.414798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.852 [2024-11-19 09:53:43.426785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.852 [2024-11-19 09:53:43.426794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.852 [2024-11-19 09:53:43.438789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.852 [2024-11-19 09:53:43.438800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.852 [2024-11-19 09:53:43.450787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:56.852 [2024-11-19 09:53:43.450798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:56.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (597308) - No such process
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 597308
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:56.852 09:53:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:56.852 delay0
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.852 09:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:35:56.852 [2024-11-19 09:53:43.581619] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:36:04.991 Initializing NVMe Controllers
00:36:04.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:04.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:04.991 Initialization complete. Launching workers.
00:36:04.991 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4960
00:36:04.991 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5238, failed to submit 42
00:36:04.991 success 5061, unsuccessful 177, failed 0
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:04.991 rmmod nvme_tcp
00:36:04.991 rmmod nvme_fabrics
00:36:04.991 rmmod nvme_keyring
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 595255 ']'
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 595255
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 595255 ']'
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 595255
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 595255
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 595255'
00:36:04.991 killing process with pid 595255
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 595255
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 595255
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:36:04.991 09:53:50
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.991 09:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:06.440 00:36:06.440 real 0m33.785s 00:36:06.440 user 0m43.345s 00:36:06.440 sys 0m12.016s 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:06.440 ************************************ 00:36:06.440 END TEST nvmf_zcopy 00:36:06.440 ************************************ 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:06.440 09:53:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:06.440 ************************************ 00:36:06.440 START TEST nvmf_nmic 00:36:06.440 ************************************ 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:06.440 * Looking for test storage... 00:36:06.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:36:06.440 09:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@337 -- # read -ra ver2 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@355 -- # echo 2 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.440 --rc genhtml_branch_coverage=1 00:36:06.440 --rc genhtml_function_coverage=1 00:36:06.440 --rc genhtml_legend=1 00:36:06.440 --rc geninfo_all_blocks=1 00:36:06.440 --rc geninfo_unexecuted_blocks=1 00:36:06.440 00:36:06.440 ' 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.440 --rc genhtml_branch_coverage=1 00:36:06.440 --rc genhtml_function_coverage=1 00:36:06.440 --rc genhtml_legend=1 00:36:06.440 --rc geninfo_all_blocks=1 00:36:06.440 --rc geninfo_unexecuted_blocks=1 00:36:06.440 00:36:06.440 ' 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.440 --rc genhtml_branch_coverage=1 00:36:06.440 --rc genhtml_function_coverage=1 00:36:06.440 --rc genhtml_legend=1 00:36:06.440 --rc geninfo_all_blocks=1 00:36:06.440 --rc geninfo_unexecuted_blocks=1 00:36:06.440 
00:36:06.440 ' 00:36:06.440 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.440 --rc genhtml_branch_coverage=1 00:36:06.440 --rc genhtml_function_coverage=1 00:36:06.440 --rc genhtml_legend=1 00:36:06.440 --rc geninfo_all_blocks=1 00:36:06.440 --rc geninfo_unexecuted_blocks=1 00:36:06.440 00:36:06.440 ' 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.441 09:53:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.441 09:53:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:36:06.441 09:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:13.272 09:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.272 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.272 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.272 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.272 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.273 09:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:13.273 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:13.273 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.273 09:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:13.273 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.273 09:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:13.273 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.273 09:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:13.575 09:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:13.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:36:13.575 00:36:13.575 --- 10.0.0.2 ping statistics --- 00:36:13.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.575 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:13.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:36:13.575 00:36:13.575 --- 10.0.0.1 ping statistics --- 00:36:13.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.575 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=603993 
00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 603993 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 603993 ']' 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.575 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:13.576 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.576 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:13.576 09:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:13.872 [2024-11-19 09:54:00.340399] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:13.872 [2024-11-19 09:54:00.341522] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:36:13.872 [2024-11-19 09:54:00.341575] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.872 [2024-11-19 09:54:00.439779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:13.872 [2024-11-19 09:54:00.493357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.872 [2024-11-19 09:54:00.493412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.872 [2024-11-19 09:54:00.493420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.872 [2024-11-19 09:54:00.493427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.872 [2024-11-19 09:54:00.493433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.872 [2024-11-19 09:54:00.495462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.872 [2024-11-19 09:54:00.495622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:13.872 [2024-11-19 09:54:00.495787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:13.872 [2024-11-19 09:54:00.495788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.872 [2024-11-19 09:54:00.573014] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:13.872 [2024-11-19 09:54:00.573726] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:13.872 [2024-11-19 09:54:00.574088] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:13.872 [2024-11-19 09:54:00.574512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:13.872 [2024-11-19 09:54:00.574553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.479 [2024-11-19 09:54:01.172760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.479 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.742 Malloc0 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.742 [2024-11-19 09:54:01.260934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.742 09:54:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:14.742 test case1: single bdev can't be used in multiple subsystems 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.742 [2024-11-19 09:54:01.296382] 
bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:14.742 [2024-11-19 09:54:01.296402] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:14.742 [2024-11-19 09:54:01.296410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:14.742 request: 00:36:14.742 { 00:36:14.742 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:14.742 "namespace": { 00:36:14.742 "bdev_name": "Malloc0", 00:36:14.742 "no_auto_visible": false 00:36:14.742 }, 00:36:14.742 "method": "nvmf_subsystem_add_ns", 00:36:14.742 "req_id": 1 00:36:14.742 } 00:36:14.742 Got JSON-RPC error response 00:36:14.742 response: 00:36:14.742 { 00:36:14.742 "code": -32602, 00:36:14.742 "message": "Invalid parameters" 00:36:14.742 } 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:14.742 Adding namespace failed - expected result. 
00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:14.742 test case2: host connect to nvmf target in multiple paths 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:14.742 [2024-11-19 09:54:01.308498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.742 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:15.002 09:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:15.573 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:15.573 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:36:15.573 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:15.573 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:15.573 09:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:36:17.490 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:17.490 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:17.490 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:17.490 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:17.490 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:17.490 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:36:17.490 09:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:17.490 [global] 00:36:17.490 thread=1 00:36:17.490 invalidate=1 00:36:17.490 rw=write 00:36:17.490 time_based=1 00:36:17.490 runtime=1 00:36:17.490 ioengine=libaio 00:36:17.490 direct=1 00:36:17.490 bs=4096 00:36:17.490 iodepth=1 00:36:17.490 norandommap=0 00:36:17.490 numjobs=1 00:36:17.490 00:36:17.490 verify_dump=1 00:36:17.490 verify_backlog=512 00:36:17.490 verify_state_save=0 00:36:17.490 do_verify=1 00:36:17.490 verify=crc32c-intel 00:36:17.490 [job0] 00:36:17.490 filename=/dev/nvme0n1 00:36:17.490 Could not set queue depth (nvme0n1) 00:36:18.060 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:18.061 fio-3.35 00:36:18.061 Starting 1 thread 00:36:19.004 00:36:19.004 job0: (groupid=0, jobs=1): err= 0: pid=604964: Tue Nov 19 
09:54:05 2024 00:36:19.004 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:19.004 slat (nsec): min=7182, max=67177, avg=25436.46, stdev=3505.07 00:36:19.004 clat (usec): min=719, max=1185, avg=955.09, stdev=58.20 00:36:19.004 lat (usec): min=745, max=1211, avg=980.53, stdev=58.69 00:36:19.004 clat percentiles (usec): 00:36:19.004 | 1.00th=[ 758], 5.00th=[ 824], 10.00th=[ 898], 20.00th=[ 930], 00:36:19.004 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:36:19.004 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1012], 95.00th=[ 1029], 00:36:19.004 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1188], 99.95th=[ 1188], 00:36:19.004 | 99.99th=[ 1188] 00:36:19.004 write: IOPS=827, BW=3309KiB/s (3388kB/s)(3312KiB/1001msec); 0 zone resets 00:36:19.004 slat (nsec): min=9083, max=66853, avg=28832.50, stdev=10038.55 00:36:19.004 clat (usec): min=221, max=814, avg=560.83, stdev=97.83 00:36:19.004 lat (usec): min=232, max=847, avg=589.66, stdev=102.74 00:36:19.004 clat percentiles (usec): 00:36:19.004 | 1.00th=[ 334], 5.00th=[ 392], 10.00th=[ 429], 20.00th=[ 478], 00:36:19.004 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 586], 00:36:19.004 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 717], 00:36:19.004 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 816], 99.95th=[ 816], 00:36:19.004 | 99.99th=[ 816] 00:36:19.004 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:19.004 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:19.004 lat (usec) : 250=0.15%, 500=16.79%, 750=44.55%, 1000=32.54% 00:36:19.004 lat (msec) : 2=5.97% 00:36:19.005 cpu : usr=3.90%, sys=3.70%, ctx=1341, majf=0, minf=1 00:36:19.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.005 issued rwts: 
total=512,828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:19.005 00:36:19.005 Run status group 0 (all jobs): 00:36:19.005 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:36:19.005 WRITE: bw=3309KiB/s (3388kB/s), 3309KiB/s-3309KiB/s (3388kB/s-3388kB/s), io=3312KiB (3391kB), run=1001-1001msec 00:36:19.005 00:36:19.005 Disk stats (read/write): 00:36:19.005 nvme0n1: ios=562/658, merge=0/0, ticks=558/279, in_queue=837, util=93.49% 00:36:19.005 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:19.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:19.265 09:54:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:19.265 rmmod nvme_tcp 00:36:19.265 rmmod nvme_fabrics 00:36:19.265 rmmod nvme_keyring 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 603993 ']' 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 603993 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 603993 ']' 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 603993 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.265 09:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603993 00:36:19.265 
09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:19.265 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:19.265 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603993' 00:36:19.265 killing process with pid 603993 00:36:19.265 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 603993 00:36:19.265 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 603993 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:19.526 09:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:22.070 00:36:22.070 real 0m15.374s 00:36:22.070 user 0m36.326s 00:36:22.070 sys 0m7.216s 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:22.070 ************************************ 00:36:22.070 END TEST nvmf_nmic 00:36:22.070 ************************************ 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:22.070 ************************************ 00:36:22.070 START TEST nvmf_fio_target 00:36:22.070 ************************************ 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:22.070 * Looking for test storage... 
00:36:22.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.070 
09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:22.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.070 --rc genhtml_branch_coverage=1 00:36:22.070 --rc genhtml_function_coverage=1 00:36:22.070 --rc genhtml_legend=1 00:36:22.070 --rc geninfo_all_blocks=1 00:36:22.070 --rc geninfo_unexecuted_blocks=1 00:36:22.070 00:36:22.070 ' 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:22.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.070 --rc genhtml_branch_coverage=1 00:36:22.070 --rc genhtml_function_coverage=1 00:36:22.070 --rc genhtml_legend=1 00:36:22.070 --rc geninfo_all_blocks=1 00:36:22.070 --rc geninfo_unexecuted_blocks=1 00:36:22.070 00:36:22.070 ' 00:36:22.070 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:22.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.070 --rc genhtml_branch_coverage=1 00:36:22.070 --rc genhtml_function_coverage=1 00:36:22.070 --rc genhtml_legend=1 00:36:22.070 --rc geninfo_all_blocks=1 00:36:22.070 --rc geninfo_unexecuted_blocks=1 00:36:22.070 00:36:22.070 ' 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:22.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.071 --rc genhtml_branch_coverage=1 00:36:22.071 --rc genhtml_function_coverage=1 00:36:22.071 --rc genhtml_legend=1 00:36:22.071 --rc geninfo_all_blocks=1 
00:36:22.071 --rc geninfo_unexecuted_blocks=1 00:36:22.071 00:36:22.071 ' 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:22.071 
09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.071 09:54:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.071 
09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:22.071 09:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:22.071 09:54:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:30.213 09:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:30.213 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:30.213 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:30.213 
09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:30.213 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:30.214 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:30.214 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:30.214 09:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:30.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:30.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:36:30.214 00:36:30.214 --- 10.0.0.2 ping statistics --- 00:36:30.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.214 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:30.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:30.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:36:30.214 00:36:30.214 --- 10.0.0.1 ping statistics --- 00:36:30.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.214 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:30.214 09:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=609774 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 609774 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 609774 ']' 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.214 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:30.215 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.215 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:30.215 09:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:30.215 [2024-11-19 09:54:16.022340] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:30.215 [2024-11-19 09:54:16.023441] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:36:30.215 [2024-11-19 09:54:16.023490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.215 [2024-11-19 09:54:16.121886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:30.215 [2024-11-19 09:54:16.175702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.215 [2024-11-19 09:54:16.175756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.215 [2024-11-19 09:54:16.175765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.215 [2024-11-19 09:54:16.175772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.215 [2024-11-19 09:54:16.175780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.215 [2024-11-19 09:54:16.177827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.215 [2024-11-19 09:54:16.177964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.215 [2024-11-19 09:54:16.178125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.215 [2024-11-19 09:54:16.178126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.215 [2024-11-19 09:54:16.255294] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:30.215 [2024-11-19 09:54:16.256351] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:30.215 [2024-11-19 09:54:16.256604] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:30.215 [2024-11-19 09:54:16.257056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:30.215 [2024-11-19 09:54:16.257092] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:30.215 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:30.215 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:36:30.215 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:30.215 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:30.215 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:30.215 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.215 09:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:30.477 [2024-11-19 09:54:17.059143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.477 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:30.738 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:30.738 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:36:30.999 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:30.999 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:30.999 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:30.999 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:31.259 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:31.259 09:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:31.519 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:31.780 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:31.780 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:32.041 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:32.041 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:32.041 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:36:32.041 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:32.302 09:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:32.577 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:32.577 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:32.577 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:32.577 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:32.837 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:33.097 [2024-11-19 09:54:19.643086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:33.097 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:33.358 09:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:33.358 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:33.931 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:33.931 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:36:33.931 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:33.931 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:36:33.931 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:36:33.931 09:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:36:35.841 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:35.841 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:35.841 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:35.841 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:36:35.842 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:35.842 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:36:35.842 09:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:36.102 [global] 00:36:36.102 thread=1 00:36:36.102 invalidate=1 00:36:36.102 rw=write 00:36:36.102 time_based=1 00:36:36.102 runtime=1 00:36:36.102 ioengine=libaio 00:36:36.102 direct=1 00:36:36.102 bs=4096 00:36:36.102 iodepth=1 00:36:36.102 norandommap=0 00:36:36.102 numjobs=1 00:36:36.102 00:36:36.102 verify_dump=1 00:36:36.102 verify_backlog=512 00:36:36.102 verify_state_save=0 00:36:36.102 do_verify=1 00:36:36.102 verify=crc32c-intel 00:36:36.102 [job0] 00:36:36.102 filename=/dev/nvme0n1 00:36:36.102 [job1] 00:36:36.102 filename=/dev/nvme0n2 00:36:36.102 [job2] 00:36:36.102 filename=/dev/nvme0n3 00:36:36.102 [job3] 00:36:36.102 filename=/dev/nvme0n4 00:36:36.102 Could not set queue depth (nvme0n1) 00:36:36.102 Could not set queue depth (nvme0n2) 00:36:36.102 Could not set queue depth (nvme0n3) 00:36:36.102 Could not set queue depth (nvme0n4) 00:36:36.363 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:36.363 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:36.363 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:36.363 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:36.363 fio-3.35 00:36:36.363 Starting 4 threads 00:36:37.748 00:36:37.748 job0: (groupid=0, jobs=1): err= 0: pid=611343: Tue Nov 19 09:54:24 2024 00:36:37.748 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:37.748 slat (nsec): min=7689, max=58724, avg=25901.21, stdev=2661.32 00:36:37.748 clat (usec): min=548, max=1281, avg=1023.78, stdev=105.53 00:36:37.748 lat (usec): min=575, 
max=1307, avg=1049.68, stdev=105.51 00:36:37.748 clat percentiles (usec): 00:36:37.748 | 1.00th=[ 693], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 947], 00:36:37.748 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1057], 00:36:37.748 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:36:37.748 | 99.00th=[ 1221], 99.50th=[ 1270], 99.90th=[ 1287], 99.95th=[ 1287], 00:36:37.748 | 99.99th=[ 1287] 00:36:37.748 write: IOPS=678, BW=2713KiB/s (2778kB/s)(2716KiB/1001msec); 0 zone resets 00:36:37.748 slat (nsec): min=10231, max=77961, avg=31838.48, stdev=8186.63 00:36:37.748 clat (usec): min=150, max=1050, avg=635.43, stdev=117.68 00:36:37.748 lat (usec): min=161, max=1083, avg=667.27, stdev=120.52 00:36:37.748 clat percentiles (usec): 00:36:37.748 | 1.00th=[ 355], 5.00th=[ 424], 10.00th=[ 478], 20.00th=[ 537], 00:36:37.748 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:36:37.748 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 807], 00:36:37.748 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1057], 99.95th=[ 1057], 00:36:37.748 | 99.99th=[ 1057] 00:36:37.748 bw ( KiB/s): min= 4096, max= 4096, per=37.65%, avg=4096.00, stdev= 0.00, samples=1 00:36:37.748 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:37.748 lat (usec) : 250=0.17%, 500=7.47%, 750=42.32%, 1000=23.17% 00:36:37.748 lat (msec) : 2=26.87% 00:36:37.748 cpu : usr=1.90%, sys=3.50%, ctx=1195, majf=0, minf=1 00:36:37.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.748 issued rwts: total=512,679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.748 job1: (groupid=0, jobs=1): err= 0: pid=611344: Tue Nov 19 09:54:24 2024 00:36:37.748 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:36:37.748 slat (nsec): min=6773, max=63091, avg=27167.64, stdev=5210.33 00:36:37.748 clat (usec): min=569, max=1299, avg=1018.87, stdev=166.10 00:36:37.748 lat (usec): min=597, max=1326, avg=1046.04, stdev=167.12 00:36:37.748 clat percentiles (usec): 00:36:37.748 | 1.00th=[ 611], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 840], 00:36:37.748 | 30.00th=[ 955], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1106], 00:36:37.748 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[ 1237], 00:36:37.748 | 99.00th=[ 1287], 99.50th=[ 1287], 99.90th=[ 1303], 99.95th=[ 1303], 00:36:37.748 | 99.99th=[ 1303] 00:36:37.748 write: IOPS=753, BW=3013KiB/s (3085kB/s)(3016KiB/1001msec); 0 zone resets 00:36:37.748 slat (nsec): min=9436, max=70095, avg=32473.57, stdev=9772.07 00:36:37.748 clat (usec): min=157, max=929, avg=570.30, stdev=125.64 00:36:37.748 lat (usec): min=193, max=964, avg=602.77, stdev=129.19 00:36:37.749 clat percentiles (usec): 00:36:37.749 | 1.00th=[ 247], 5.00th=[ 359], 10.00th=[ 408], 20.00th=[ 457], 00:36:37.749 | 30.00th=[ 494], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 611], 00:36:37.749 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 758], 00:36:37.749 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 930], 99.95th=[ 930], 00:36:37.749 | 99.99th=[ 930] 00:36:37.749 bw ( KiB/s): min= 4096, max= 4096, per=37.65%, avg=4096.00, stdev= 0.00, samples=1 00:36:37.749 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:37.749 lat (usec) : 250=0.63%, 500=17.85%, 750=40.92%, 1000=14.45% 00:36:37.749 lat (msec) : 2=26.15% 00:36:37.749 cpu : usr=2.10%, sys=5.60%, ctx=1267, majf=0, minf=1 00:36:37.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.749 issued rwts: total=512,754,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:37.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.749 job2: (groupid=0, jobs=1): err= 0: pid=611345: Tue Nov 19 09:54:24 2024 00:36:37.749 read: IOPS=18, BW=73.4KiB/s (75.2kB/s)(76.0KiB/1035msec) 00:36:37.749 slat (nsec): min=24969, max=25697, avg=25262.68, stdev=199.93 00:36:37.749 clat (usec): min=895, max=42932, avg=39736.27, stdev=9419.76 00:36:37.749 lat (usec): min=920, max=42957, avg=39761.53, stdev=9419.66 00:36:37.749 clat percentiles (usec): 00:36:37.749 | 1.00th=[ 898], 5.00th=[ 898], 10.00th=[41157], 20.00th=[41157], 00:36:37.749 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:36:37.749 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:36:37.749 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:37.749 | 99.99th=[42730] 00:36:37.749 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:36:37.749 slat (nsec): min=9678, max=53576, avg=29977.96, stdev=9212.69 00:36:37.749 clat (usec): min=128, max=839, avg=509.21, stdev=128.81 00:36:37.749 lat (usec): min=140, max=871, avg=539.19, stdev=131.27 00:36:37.749 clat percentiles (usec): 00:36:37.749 | 1.00th=[ 176], 5.00th=[ 281], 10.00th=[ 322], 20.00th=[ 400], 00:36:37.749 | 30.00th=[ 449], 40.00th=[ 494], 50.00th=[ 523], 60.00th=[ 545], 00:36:37.749 | 70.00th=[ 578], 80.00th=[ 627], 90.00th=[ 668], 95.00th=[ 701], 00:36:37.749 | 99.00th=[ 758], 99.50th=[ 799], 99.90th=[ 840], 99.95th=[ 840], 00:36:37.749 | 99.99th=[ 840] 00:36:37.749 bw ( KiB/s): min= 4096, max= 4096, per=37.65%, avg=4096.00, stdev= 0.00, samples=1 00:36:37.749 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:37.749 lat (usec) : 250=2.82%, 500=37.85%, 750=54.24%, 1000=1.69% 00:36:37.749 lat (msec) : 50=3.39% 00:36:37.749 cpu : usr=0.87%, sys=1.26%, ctx=531, majf=0, minf=2 00:36:37.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.749 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.749 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.749 job3: (groupid=0, jobs=1): err= 0: pid=611346: Tue Nov 19 09:54:24 2024 00:36:37.749 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:37.749 slat (nsec): min=8074, max=46349, avg=28110.54, stdev=2866.42 00:36:37.749 clat (usec): min=599, max=1251, avg=1007.82, stdev=104.45 00:36:37.749 lat (usec): min=627, max=1279, avg=1035.93, stdev=104.61 00:36:37.749 clat percentiles (usec): 00:36:37.749 | 1.00th=[ 668], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 938], 00:36:37.749 | 30.00th=[ 971], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1045], 00:36:37.749 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:36:37.749 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1254], 99.95th=[ 1254], 00:36:37.749 | 99.99th=[ 1254] 00:36:37.749 write: IOPS=869, BW=3477KiB/s (3560kB/s)(3480KiB/1001msec); 0 zone resets 00:36:37.749 slat (nsec): min=9765, max=55769, avg=31795.47, stdev=10493.18 00:36:37.749 clat (usec): min=131, max=859, avg=496.15, stdev=119.55 00:36:37.749 lat (usec): min=143, max=895, avg=527.95, stdev=123.31 00:36:37.749 clat percentiles (usec): 00:36:37.749 | 1.00th=[ 269], 5.00th=[ 318], 10.00th=[ 347], 20.00th=[ 396], 00:36:37.749 | 30.00th=[ 437], 40.00th=[ 457], 50.00th=[ 482], 60.00th=[ 515], 00:36:37.749 | 70.00th=[ 545], 80.00th=[ 594], 90.00th=[ 676], 95.00th=[ 717], 00:36:37.749 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 857], 99.95th=[ 857], 00:36:37.749 | 99.99th=[ 857] 00:36:37.749 bw ( KiB/s): min= 4096, max= 4096, per=37.65%, avg=4096.00, stdev= 0.00, samples=1 00:36:37.749 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:37.749 lat (usec) : 250=0.36%, 500=35.02%, 750=26.99%, 1000=15.48% 
00:36:37.749 lat (msec) : 2=22.14% 00:36:37.749 cpu : usr=2.80%, sys=5.60%, ctx=1383, majf=0, minf=1 00:36:37.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.749 issued rwts: total=512,870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.749 00:36:37.749 Run status group 0 (all jobs): 00:36:37.749 READ: bw=6010KiB/s (6154kB/s), 73.4KiB/s-2046KiB/s (75.2kB/s-2095kB/s), io=6220KiB (6369kB), run=1001-1035msec 00:36:37.749 WRITE: bw=10.6MiB/s (11.1MB/s), 1979KiB/s-3477KiB/s (2026kB/s-3560kB/s), io=11.0MiB (11.5MB), run=1001-1035msec 00:36:37.749 00:36:37.749 Disk stats (read/write): 00:36:37.749 nvme0n1: ios=487/512, merge=0/0, ticks=1438/320, in_queue=1758, util=96.59% 00:36:37.749 nvme0n2: ios=524/512, merge=0/0, ticks=1446/232, in_queue=1678, util=96.73% 00:36:37.749 nvme0n3: ios=14/512, merge=0/0, ticks=546/235, in_queue=781, util=88.37% 00:36:37.749 nvme0n4: ios=535/607, merge=0/0, ticks=1428/244, in_queue=1672, util=96.57% 00:36:37.749 09:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:37.749 [global] 00:36:37.749 thread=1 00:36:37.749 invalidate=1 00:36:37.749 rw=randwrite 00:36:37.749 time_based=1 00:36:37.749 runtime=1 00:36:37.749 ioengine=libaio 00:36:37.749 direct=1 00:36:37.749 bs=4096 00:36:37.749 iodepth=1 00:36:37.749 norandommap=0 00:36:37.749 numjobs=1 00:36:37.749 00:36:37.749 verify_dump=1 00:36:37.749 verify_backlog=512 00:36:37.749 verify_state_save=0 00:36:37.749 do_verify=1 00:36:37.749 verify=crc32c-intel 00:36:37.749 [job0] 00:36:37.749 filename=/dev/nvme0n1 00:36:37.749 [job1] 00:36:37.749 filename=/dev/nvme0n2 
00:36:37.749 [job2] 00:36:37.749 filename=/dev/nvme0n3 00:36:37.749 [job3] 00:36:37.749 filename=/dev/nvme0n4 00:36:37.749 Could not set queue depth (nvme0n1) 00:36:37.749 Could not set queue depth (nvme0n2) 00:36:37.749 Could not set queue depth (nvme0n3) 00:36:37.749 Could not set queue depth (nvme0n4) 00:36:38.010 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:38.010 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:38.010 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:38.010 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:38.010 fio-3.35 00:36:38.010 Starting 4 threads 00:36:39.397 00:36:39.397 job0: (groupid=0, jobs=1): err= 0: pid=611861: Tue Nov 19 09:54:25 2024 00:36:39.397 read: IOPS=468, BW=1874KiB/s (1919kB/s)(1876KiB/1001msec) 00:36:39.397 slat (nsec): min=6986, max=42986, avg=23893.49, stdev=5387.99 00:36:39.397 clat (usec): min=484, max=41898, avg=1333.09, stdev=3239.00 00:36:39.397 lat (usec): min=509, max=41924, avg=1356.98, stdev=3238.73 00:36:39.397 clat percentiles (usec): 00:36:39.397 | 1.00th=[ 693], 5.00th=[ 840], 10.00th=[ 914], 20.00th=[ 996], 00:36:39.397 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1090], 00:36:39.397 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1237], 00:36:39.397 | 99.00th=[ 1418], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:39.397 | 99.99th=[41681] 00:36:39.397 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:36:39.397 slat (nsec): min=8759, max=66113, avg=27487.96, stdev=9192.32 00:36:39.397 clat (usec): min=156, max=1018, avg=667.76, stdev=136.49 00:36:39.397 lat (usec): min=165, max=1061, avg=695.25, stdev=138.65 00:36:39.397 clat percentiles (usec): 00:36:39.397 | 1.00th=[ 326], 
5.00th=[ 441], 10.00th=[ 494], 20.00th=[ 562], 00:36:39.397 | 30.00th=[ 594], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 701], 00:36:39.397 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 840], 95.00th=[ 889], 00:36:39.397 | 99.00th=[ 963], 99.50th=[ 1004], 99.90th=[ 1020], 99.95th=[ 1020], 00:36:39.397 | 99.99th=[ 1020] 00:36:39.397 bw ( KiB/s): min= 4096, max= 4096, per=45.56%, avg=4096.00, stdev= 0.00, samples=1 00:36:39.397 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:39.397 lat (usec) : 250=0.10%, 500=5.91%, 750=32.52%, 1000=23.24% 00:36:39.397 lat (msec) : 2=37.82%, 10=0.10%, 50=0.31% 00:36:39.397 cpu : usr=1.70%, sys=2.70%, ctx=981, majf=0, minf=1 00:36:39.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.397 issued rwts: total=469,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:39.397 job1: (groupid=0, jobs=1): err= 0: pid=611862: Tue Nov 19 09:54:25 2024 00:36:39.397 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:39.397 slat (nsec): min=7287, max=45630, avg=26636.65, stdev=4028.04 00:36:39.397 clat (usec): min=759, max=41080, avg=1083.69, stdev=1773.21 00:36:39.397 lat (usec): min=769, max=41106, avg=1110.32, stdev=1773.25 00:36:39.397 clat percentiles (usec): 00:36:39.397 | 1.00th=[ 799], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 947], 00:36:39.397 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 1004], 60.00th=[ 1020], 00:36:39.397 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1156], 00:36:39.397 | 99.00th=[ 1254], 99.50th=[ 1336], 99.90th=[41157], 99.95th=[41157], 00:36:39.397 | 99.99th=[41157] 00:36:39.397 write: IOPS=656, BW=2625KiB/s (2688kB/s)(2628KiB/1001msec); 0 zone resets 00:36:39.397 slat (nsec): min=9003, max=64181, 
avg=30225.65, stdev=9904.70 00:36:39.397 clat (usec): min=201, max=2481, avg=612.39, stdev=140.75 00:36:39.397 lat (usec): min=212, max=2519, avg=642.62, stdev=143.59 00:36:39.397 clat percentiles (usec): 00:36:39.397 | 1.00th=[ 355], 5.00th=[ 383], 10.00th=[ 449], 20.00th=[ 498], 00:36:39.397 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:36:39.397 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 791], 00:36:39.397 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 2474], 99.95th=[ 2474], 00:36:39.397 | 99.99th=[ 2474] 00:36:39.397 bw ( KiB/s): min= 4096, max= 4096, per=45.56%, avg=4096.00, stdev= 0.00, samples=1 00:36:39.397 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:39.397 lat (usec) : 250=0.09%, 500=11.29%, 750=39.01%, 1000=27.20% 00:36:39.397 lat (msec) : 2=22.24%, 4=0.09%, 50=0.09% 00:36:39.397 cpu : usr=1.80%, sys=5.00%, ctx=1170, majf=0, minf=1 00:36:39.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.397 issued rwts: total=512,657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:39.397 job2: (groupid=0, jobs=1): err= 0: pid=611863: Tue Nov 19 09:54:25 2024 00:36:39.397 read: IOPS=17, BW=69.1KiB/s (70.8kB/s)(72.0KiB/1042msec) 00:36:39.397 slat (nsec): min=25867, max=27179, avg=26382.61, stdev=319.93 00:36:39.397 clat (usec): min=1160, max=42019, avg=39467.21, stdev=9568.79 00:36:39.397 lat (usec): min=1186, max=42045, avg=39493.59, stdev=9568.82 00:36:39.397 clat percentiles (usec): 00:36:39.397 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41157], 20.00th=[41157], 00:36:39.397 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:36:39.397 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:39.397 
| 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:39.397 | 99.99th=[42206] 00:36:39.397 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:36:39.397 slat (nsec): min=8968, max=62249, avg=29513.97, stdev=9090.57 00:36:39.397 clat (usec): min=199, max=920, avg=609.86, stdev=124.43 00:36:39.397 lat (usec): min=231, max=952, avg=639.37, stdev=127.94 00:36:39.397 clat percentiles (usec): 00:36:39.397 | 1.00th=[ 302], 5.00th=[ 379], 10.00th=[ 441], 20.00th=[ 498], 00:36:39.397 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:36:39.397 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:36:39.397 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 922], 99.95th=[ 922], 00:36:39.397 | 99.99th=[ 922] 00:36:39.397 bw ( KiB/s): min= 4096, max= 4096, per=45.56%, avg=4096.00, stdev= 0.00, samples=1 00:36:39.397 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:39.397 lat (usec) : 250=0.19%, 500=19.25%, 750=65.66%, 1000=11.51% 00:36:39.397 lat (msec) : 2=0.19%, 50=3.21% 00:36:39.397 cpu : usr=0.77%, sys=2.11%, ctx=530, majf=0, minf=1 00:36:39.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.397 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:39.397 job3: (groupid=0, jobs=1): err= 0: pid=611866: Tue Nov 19 09:54:25 2024 00:36:39.397 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:39.397 slat (nsec): min=25744, max=61713, avg=26936.90, stdev=3069.28 00:36:39.397 clat (usec): min=667, max=1325, avg=1033.11, stdev=118.38 00:36:39.397 lat (usec): min=694, max=1351, avg=1060.05, stdev=118.29 00:36:39.397 clat percentiles (usec): 00:36:39.397 | 1.00th=[ 742], 5.00th=[ 824], 
10.00th=[ 881], 20.00th=[ 947], 00:36:39.397 | 30.00th=[ 971], 40.00th=[ 1004], 50.00th=[ 1037], 60.00th=[ 1057], 00:36:39.397 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1188], 95.00th=[ 1237], 00:36:39.397 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1319], 99.95th=[ 1319], 00:36:39.397 | 99.99th=[ 1319] 00:36:39.397 write: IOPS=660, BW=2641KiB/s (2705kB/s)(2644KiB/1001msec); 0 zone resets 00:36:39.397 slat (nsec): min=9900, max=51801, avg=30127.86, stdev=9287.54 00:36:39.397 clat (usec): min=237, max=985, avg=647.82, stdev=125.61 00:36:39.397 lat (usec): min=247, max=1018, avg=677.95, stdev=128.51 00:36:39.397 clat percentiles (usec): 00:36:39.397 | 1.00th=[ 383], 5.00th=[ 420], 10.00th=[ 474], 20.00th=[ 545], 00:36:39.397 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 693], 00:36:39.397 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 848], 00:36:39.397 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 988], 00:36:39.397 | 99.99th=[ 988] 00:36:39.397 bw ( KiB/s): min= 4096, max= 4096, per=45.56%, avg=4096.00, stdev= 0.00, samples=1 00:36:39.397 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:39.397 lat (usec) : 250=0.09%, 500=7.33%, 750=37.43%, 1000=28.64% 00:36:39.397 lat (msec) : 2=26.51% 00:36:39.397 cpu : usr=1.00%, sys=4.30%, ctx=1176, majf=0, minf=1 00:36:39.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.398 issued rwts: total=512,661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.398 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:39.398 00:36:39.398 Run status group 0 (all jobs): 00:36:39.398 READ: bw=5800KiB/s (5940kB/s), 69.1KiB/s-2046KiB/s (70.8kB/s-2095kB/s), io=6044KiB (6189kB), run=1001-1042msec 00:36:39.398 WRITE: bw=8990KiB/s (9206kB/s), 1965KiB/s-2641KiB/s 
(2013kB/s-2705kB/s), io=9368KiB (9593kB), run=1001-1042msec 00:36:39.398 00:36:39.398 Disk stats (read/write): 00:36:39.398 nvme0n1: ios=372/512, merge=0/0, ticks=530/328, in_queue=858, util=88.08% 00:36:39.398 nvme0n2: ios=468/512, merge=0/0, ticks=1450/250, in_queue=1700, util=97.15% 00:36:39.398 nvme0n3: ios=13/512, merge=0/0, ticks=502/248, in_queue=750, util=88.40% 00:36:39.398 nvme0n4: ios=508/512, merge=0/0, ticks=1302/331, in_queue=1633, util=97.33% 00:36:39.398 09:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:39.398 [global] 00:36:39.398 thread=1 00:36:39.398 invalidate=1 00:36:39.398 rw=write 00:36:39.398 time_based=1 00:36:39.398 runtime=1 00:36:39.398 ioengine=libaio 00:36:39.398 direct=1 00:36:39.398 bs=4096 00:36:39.398 iodepth=128 00:36:39.398 norandommap=0 00:36:39.398 numjobs=1 00:36:39.398 00:36:39.398 verify_dump=1 00:36:39.398 verify_backlog=512 00:36:39.398 verify_state_save=0 00:36:39.398 do_verify=1 00:36:39.398 verify=crc32c-intel 00:36:39.398 [job0] 00:36:39.398 filename=/dev/nvme0n1 00:36:39.398 [job1] 00:36:39.398 filename=/dev/nvme0n2 00:36:39.398 [job2] 00:36:39.398 filename=/dev/nvme0n3 00:36:39.398 [job3] 00:36:39.398 filename=/dev/nvme0n4 00:36:39.398 Could not set queue depth (nvme0n1) 00:36:39.398 Could not set queue depth (nvme0n2) 00:36:39.398 Could not set queue depth (nvme0n3) 00:36:39.398 Could not set queue depth (nvme0n4) 00:36:39.657 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:39.657 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:39.657 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:39.657 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:36:39.657 fio-3.35 00:36:39.657 Starting 4 threads 00:36:41.045 00:36:41.045 job0: (groupid=0, jobs=1): err= 0: pid=612387: Tue Nov 19 09:54:27 2024 00:36:41.045 read: IOPS=4381, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1007msec) 00:36:41.045 slat (nsec): min=1990, max=15210k, avg=112979.19, stdev=891864.29 00:36:41.045 clat (usec): min=1424, max=33438, avg=15152.03, stdev=4699.04 00:36:41.045 lat (usec): min=3012, max=33445, avg=15265.01, stdev=4769.47 00:36:41.045 clat percentiles (usec): 00:36:41.045 | 1.00th=[ 5473], 5.00th=[ 7242], 10.00th=[ 8848], 20.00th=[11600], 00:36:41.045 | 30.00th=[13042], 40.00th=[14353], 50.00th=[15008], 60.00th=[15795], 00:36:41.045 | 70.00th=[17171], 80.00th=[18220], 90.00th=[20317], 95.00th=[23462], 00:36:41.045 | 99.00th=[29230], 99.50th=[33424], 99.90th=[33424], 99.95th=[33424], 00:36:41.045 | 99.99th=[33424] 00:36:41.045 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:36:41.045 slat (nsec): min=1702, max=13002k, avg=97469.52, stdev=701774.73 00:36:41.045 clat (usec): min=2526, max=38558, avg=13155.73, stdev=6268.57 00:36:41.045 lat (usec): min=2534, max=38568, avg=13253.20, stdev=6315.76 00:36:41.045 clat percentiles (usec): 00:36:41.045 | 1.00th=[ 4015], 5.00th=[ 5080], 10.00th=[ 6587], 20.00th=[ 8356], 00:36:41.045 | 30.00th=[ 9634], 40.00th=[10945], 50.00th=[11994], 60.00th=[13435], 00:36:41.045 | 70.00th=[14484], 80.00th=[16712], 90.00th=[20841], 95.00th=[27395], 00:36:41.045 | 99.00th=[35390], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:36:41.045 | 99.99th=[38536] 00:36:41.045 bw ( KiB/s): min=16568, max=20296, per=18.95%, avg=18432.00, stdev=2636.09, samples=2 00:36:41.045 iops : min= 4142, max= 5074, avg=4608.00, stdev=659.02, samples=2 00:36:41.045 lat (msec) : 2=0.01%, 4=0.74%, 10=21.76%, 20=66.60%, 50=10.89% 00:36:41.045 cpu : usr=3.38%, sys=5.57%, ctx=257, majf=0, minf=1 00:36:41.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 
00:36:41.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:41.045 issued rwts: total=4412,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:41.045 job1: (groupid=0, jobs=1): err= 0: pid=612388: Tue Nov 19 09:54:27 2024 00:36:41.045 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:36:41.045 slat (nsec): min=898, max=15371k, avg=79602.80, stdev=662155.06 00:36:41.045 clat (usec): min=2999, max=39731, avg=10157.07, stdev=5323.53 00:36:41.045 lat (usec): min=4048, max=39739, avg=10236.67, stdev=5374.80 00:36:41.045 clat percentiles (usec): 00:36:41.045 | 1.00th=[ 4883], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6521], 00:36:41.045 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 8291], 60.00th=[ 9765], 00:36:41.045 | 70.00th=[11207], 80.00th=[12780], 90.00th=[16581], 95.00th=[19530], 00:36:41.045 | 99.00th=[36439], 99.50th=[38011], 99.90th=[39060], 99.95th=[39584], 00:36:41.045 | 99.99th=[39584] 00:36:41.045 write: IOPS=6965, BW=27.2MiB/s (28.5MB/s)(27.4MiB/1006msec); 0 zone resets 00:36:41.045 slat (nsec): min=1607, max=9009.3k, avg=62104.12, stdev=452451.51 00:36:41.045 clat (usec): min=1209, max=39699, avg=8564.97, stdev=4047.91 00:36:41.045 lat (usec): min=1219, max=39701, avg=8627.07, stdev=4070.78 00:36:41.045 clat percentiles (usec): 00:36:41.046 | 1.00th=[ 3130], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5800], 00:36:41.046 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 8291], 00:36:41.046 | 70.00th=[ 9503], 80.00th=[10421], 90.00th=[13435], 95.00th=[15008], 00:36:41.046 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:36:41.046 | 99.99th=[39584] 00:36:41.046 bw ( KiB/s): min=26192, max=28848, per=28.29%, avg=27520.00, stdev=1878.08, samples=2 00:36:41.046 iops : min= 6548, max= 7212, avg=6880.00, stdev=469.52, samples=2 
00:36:41.046 lat (msec) : 2=0.21%, 4=0.91%, 10=68.09%, 20=27.49%, 50=3.29% 00:36:41.046 cpu : usr=5.17%, sys=6.07%, ctx=463, majf=0, minf=2 00:36:41.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:36:41.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:41.046 issued rwts: total=6656,7007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:41.046 job2: (groupid=0, jobs=1): err= 0: pid=612389: Tue Nov 19 09:54:27 2024 00:36:41.046 read: IOPS=4988, BW=19.5MiB/s (20.4MB/s)(20.4MiB/1046msec) 00:36:41.046 slat (nsec): min=911, max=22289k, avg=91910.84, stdev=725369.20 00:36:41.046 clat (usec): min=4575, max=56756, avg=12394.32, stdev=8003.55 00:36:41.046 lat (usec): min=5010, max=56763, avg=12486.23, stdev=8045.16 00:36:41.046 clat percentiles (usec): 00:36:41.046 | 1.00th=[ 5932], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8225], 00:36:41.046 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[10159], 00:36:41.046 | 70.00th=[13566], 80.00th=[15139], 90.00th=[18220], 95.00th=[22414], 00:36:41.046 | 99.00th=[51643], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:36:41.046 | 99.99th=[56886] 00:36:41.046 write: IOPS=5384, BW=21.0MiB/s (22.1MB/s)(22.0MiB/1046msec); 0 zone resets 00:36:41.046 slat (nsec): min=1578, max=21095k, avg=87687.34, stdev=696537.64 00:36:41.046 clat (usec): min=1212, max=57092, avg=12075.68, stdev=6832.38 00:36:41.046 lat (usec): min=1225, max=62426, avg=12163.37, stdev=6886.32 00:36:41.046 clat percentiles (usec): 00:36:41.046 | 1.00th=[ 5604], 5.00th=[ 6980], 10.00th=[ 7767], 20.00th=[ 8160], 00:36:41.046 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[11863], 00:36:41.046 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15270], 95.00th=[25822], 00:36:41.046 | 99.00th=[40633], 99.50th=[56886], 99.90th=[56886], 
99.95th=[56886], 00:36:41.046 | 99.99th=[56886] 00:36:41.046 bw ( KiB/s): min=20480, max=24336, per=23.03%, avg=22408.00, stdev=2726.60, samples=2 00:36:41.046 iops : min= 5120, max= 6084, avg=5602.00, stdev=681.65, samples=2 00:36:41.046 lat (msec) : 2=0.10%, 10=54.45%, 20=38.60%, 50=5.99%, 100=0.86% 00:36:41.046 cpu : usr=3.64%, sys=5.74%, ctx=362, majf=0, minf=2 00:36:41.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:36:41.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:41.046 issued rwts: total=5218,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:41.046 job3: (groupid=0, jobs=1): err= 0: pid=612390: Tue Nov 19 09:54:27 2024 00:36:41.046 read: IOPS=7981, BW=31.2MiB/s (32.7MB/s)(31.3MiB/1005msec) 00:36:41.046 slat (nsec): min=988, max=6990.4k, avg=63909.03, stdev=486055.83 00:36:41.046 clat (usec): min=2225, max=14769, avg=8297.06, stdev=2054.95 00:36:41.046 lat (usec): min=2230, max=14799, avg=8360.97, stdev=2084.80 00:36:41.046 clat percentiles (usec): 00:36:41.046 | 1.00th=[ 4146], 5.00th=[ 5473], 10.00th=[ 6390], 20.00th=[ 6915], 00:36:41.046 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029], 00:36:41.046 | 70.00th=[ 8586], 80.00th=[ 9765], 90.00th=[11600], 95.00th=[12780], 00:36:41.046 | 99.00th=[13829], 99.50th=[14091], 99.90th=[14484], 99.95th=[14615], 00:36:41.046 | 99.99th=[14746] 00:36:41.046 write: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec); 0 zone resets 00:36:41.046 slat (nsec): min=1696, max=19302k, avg=54630.10, stdev=402807.54 00:36:41.046 clat (usec): min=1195, max=20663, avg=7419.47, stdev=2471.37 00:36:41.046 lat (usec): min=1253, max=24647, avg=7474.10, stdev=2481.58 00:36:41.046 clat percentiles (usec): 00:36:41.046 | 1.00th=[ 2999], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5538], 
00:36:41.046 | 30.00th=[ 6390], 40.00th=[ 7242], 50.00th=[ 7635], 60.00th=[ 7832], 00:36:41.046 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[10683], 00:36:41.046 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20579], 99.95th=[20579], 00:36:41.046 | 99.99th=[20579] 00:36:41.046 bw ( KiB/s): min=32768, max=32768, per=33.68%, avg=32768.00, stdev= 0.00, samples=2 00:36:41.046 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:36:41.046 lat (msec) : 2=0.08%, 4=1.94%, 10=84.46%, 20=12.73%, 50=0.78% 00:36:41.046 cpu : usr=5.88%, sys=7.27%, ctx=663, majf=0, minf=1 00:36:41.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:36:41.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:41.046 issued rwts: total=8021,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:41.046 00:36:41.046 Run status group 0 (all jobs): 00:36:41.046 READ: bw=90.8MiB/s (95.2MB/s), 17.1MiB/s-31.2MiB/s (17.9MB/s-32.7MB/s), io=94.9MiB (99.6MB), run=1005-1046msec 00:36:41.046 WRITE: bw=95.0MiB/s (99.6MB/s), 17.9MiB/s-31.8MiB/s (18.7MB/s-33.4MB/s), io=99.4MiB (104MB), run=1005-1046msec 00:36:41.046 00:36:41.046 Disk stats (read/write): 00:36:41.046 nvme0n1: ios=3615/4039, merge=0/0, ticks=44081/43719, in_queue=87800, util=96.49% 00:36:41.046 nvme0n2: ios=5151/5390, merge=0/0, ticks=54086/46958, in_queue=101044, util=90.81% 00:36:41.046 nvme0n3: ios=4655/4724, merge=0/0, ticks=26379/25608, in_queue=51987, util=92.08% 00:36:41.046 nvme0n4: ios=6697/6826, merge=0/0, ticks=52900/46577, in_queue=99477, util=97.97% 00:36:41.046 09:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:41.046 [global] 00:36:41.046 
thread=1 00:36:41.046 invalidate=1 00:36:41.046 rw=randwrite 00:36:41.046 time_based=1 00:36:41.046 runtime=1 00:36:41.046 ioengine=libaio 00:36:41.046 direct=1 00:36:41.046 bs=4096 00:36:41.046 iodepth=128 00:36:41.046 norandommap=0 00:36:41.046 numjobs=1 00:36:41.046 00:36:41.046 verify_dump=1 00:36:41.046 verify_backlog=512 00:36:41.046 verify_state_save=0 00:36:41.046 do_verify=1 00:36:41.046 verify=crc32c-intel 00:36:41.046 [job0] 00:36:41.046 filename=/dev/nvme0n1 00:36:41.046 [job1] 00:36:41.046 filename=/dev/nvme0n2 00:36:41.046 [job2] 00:36:41.046 filename=/dev/nvme0n3 00:36:41.046 [job3] 00:36:41.046 filename=/dev/nvme0n4 00:36:41.046 Could not set queue depth (nvme0n1) 00:36:41.046 Could not set queue depth (nvme0n2) 00:36:41.046 Could not set queue depth (nvme0n3) 00:36:41.046 Could not set queue depth (nvme0n4) 00:36:41.615 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:41.615 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:41.615 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:41.615 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:41.615 fio-3.35 00:36:41.615 Starting 4 threads 00:36:42.560 00:36:42.560 job0: (groupid=0, jobs=1): err= 0: pid=612915: Tue Nov 19 09:54:29 2024 00:36:42.560 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:36:42.560 slat (nsec): min=916, max=17002k, avg=99991.16, stdev=701788.25 00:36:42.560 clat (usec): min=2817, max=36988, avg=13625.01, stdev=5424.94 00:36:42.560 lat (usec): min=2819, max=37017, avg=13725.00, stdev=5471.62 00:36:42.560 clat percentiles (usec): 00:36:42.560 | 1.00th=[ 4490], 5.00th=[ 5342], 10.00th=[ 8160], 20.00th=[10028], 00:36:42.560 | 30.00th=[10552], 40.00th=[11863], 50.00th=[12780], 60.00th=[13566], 00:36:42.560 | 
70.00th=[14746], 80.00th=[16909], 90.00th=[20579], 95.00th=[26870], 00:36:42.560 | 99.00th=[30540], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:36:42.560 | 99.99th=[36963] 00:36:42.560 write: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1006msec); 0 zone resets 00:36:42.560 slat (nsec): min=1564, max=21945k, avg=134739.42, stdev=826780.62 00:36:42.560 clat (usec): min=485, max=53522, avg=17119.09, stdev=10901.13 00:36:42.560 lat (usec): min=493, max=53532, avg=17253.83, stdev=10967.81 00:36:42.560 clat percentiles (usec): 00:36:42.560 | 1.00th=[ 1647], 5.00th=[ 4047], 10.00th=[ 6849], 20.00th=[ 8717], 00:36:42.560 | 30.00th=[ 9896], 40.00th=[11600], 50.00th=[12780], 60.00th=[16188], 00:36:42.560 | 70.00th=[20841], 80.00th=[26346], 90.00th=[31589], 95.00th=[40633], 00:36:42.560 | 99.00th=[50594], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:36:42.560 | 99.99th=[53740] 00:36:42.560 bw ( KiB/s): min=16368, max=16400, per=21.86%, avg=16384.00, stdev=22.63, samples=2 00:36:42.560 iops : min= 4092, max= 4100, avg=4096.00, stdev= 5.66, samples=2 00:36:42.560 lat (usec) : 500=0.04% 00:36:42.560 lat (msec) : 2=0.83%, 4=1.50%, 10=23.36%, 20=53.30%, 50=20.41% 00:36:42.560 lat (msec) : 100=0.56% 00:36:42.560 cpu : usr=3.28%, sys=4.48%, ctx=352, majf=0, minf=1 00:36:42.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:42.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:42.560 issued rwts: total=4096,4169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:42.560 job1: (groupid=0, jobs=1): err= 0: pid=612916: Tue Nov 19 09:54:29 2024 00:36:42.560 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:36:42.560 slat (nsec): min=950, max=21068k, avg=89130.33, stdev=732590.71 00:36:42.560 clat (usec): min=3558, max=42120, avg=11498.25, 
stdev=4639.44 00:36:42.560 lat (usec): min=3560, max=42146, avg=11587.38, stdev=4710.46 00:36:42.560 clat percentiles (usec): 00:36:42.560 | 1.00th=[ 4621], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8455], 00:36:42.560 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11076], 00:36:42.560 | 70.00th=[11731], 80.00th=[13435], 90.00th=[17171], 95.00th=[21103], 00:36:42.560 | 99.00th=[30278], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:36:42.560 | 99.99th=[42206] 00:36:42.560 write: IOPS=4940, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1007msec); 0 zone resets 00:36:42.560 slat (nsec): min=1641, max=18227k, avg=111835.89, stdev=690785.81 00:36:42.560 clat (usec): min=1931, max=65569, avg=14784.56, stdev=12661.08 00:36:42.560 lat (usec): min=4225, max=65578, avg=14896.40, stdev=12745.93 00:36:42.560 clat percentiles (usec): 00:36:42.560 | 1.00th=[ 4359], 5.00th=[ 4948], 10.00th=[ 5669], 20.00th=[ 6063], 00:36:42.560 | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[10552], 60.00th=[12780], 00:36:42.560 | 70.00th=[13698], 80.00th=[17957], 90.00th=[33162], 95.00th=[43779], 00:36:42.560 | 99.00th=[63701], 99.50th=[63701], 99.90th=[65799], 99.95th=[65799], 00:36:42.560 | 99.99th=[65799] 00:36:42.560 bw ( KiB/s): min=18248, max=20536, per=25.87%, avg=19392.00, stdev=1617.86, samples=2 00:36:42.560 iops : min= 4562, max= 5134, avg=4848.00, stdev=404.47, samples=2 00:36:42.560 lat (msec) : 2=0.01%, 4=0.35%, 10=47.03%, 20=39.90%, 50=10.55% 00:36:42.560 lat (msec) : 100=2.15% 00:36:42.560 cpu : usr=3.88%, sys=5.07%, ctx=402, majf=0, minf=1 00:36:42.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:42.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:42.560 issued rwts: total=4608,4975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:42.560 job2: (groupid=0, 
jobs=1): err= 0: pid=612918: Tue Nov 19 09:54:29 2024 00:36:42.560 read: IOPS=4107, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1003msec) 00:36:42.560 slat (nsec): min=910, max=15242k, avg=113116.66, stdev=720588.97 00:36:42.560 clat (usec): min=1773, max=41735, avg=13861.51, stdev=6121.13 00:36:42.560 lat (usec): min=2140, max=41747, avg=13974.63, stdev=6178.87 00:36:42.560 clat percentiles (usec): 00:36:42.560 | 1.00th=[ 6718], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 8979], 00:36:42.560 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[15008], 00:36:42.560 | 70.00th=[16712], 80.00th=[18220], 90.00th=[22676], 95.00th=[25297], 00:36:42.560 | 99.00th=[29754], 99.50th=[29754], 99.90th=[41681], 99.95th=[41681], 00:36:42.560 | 99.99th=[41681] 00:36:42.560 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:36:42.560 slat (nsec): min=1489, max=21025k, avg=112892.32, stdev=770709.24 00:36:42.560 clat (usec): min=2702, max=74821, avg=15119.21, stdev=12427.96 00:36:42.560 lat (usec): min=3250, max=74825, avg=15232.10, stdev=12515.92 00:36:42.560 clat percentiles (usec): 00:36:42.560 | 1.00th=[ 4948], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 7701], 00:36:42.560 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[11076], 00:36:42.560 | 70.00th=[14222], 80.00th=[22414], 90.00th=[35390], 95.00th=[42730], 00:36:42.560 | 99.00th=[61604], 99.50th=[69731], 99.90th=[74974], 99.95th=[74974], 00:36:42.560 | 99.99th=[74974] 00:36:42.560 bw ( KiB/s): min=13152, max=22888, per=24.04%, avg=18020.00, stdev=6884.39, samples=2 00:36:42.560 iops : min= 3288, max= 5722, avg=4505.00, stdev=1721.10, samples=2 00:36:42.560 lat (msec) : 2=0.01%, 4=0.47%, 10=48.11%, 20=32.69%, 50=17.44% 00:36:42.560 lat (msec) : 100=1.28% 00:36:42.560 cpu : usr=1.80%, sys=3.19%, ctx=538, majf=0, minf=1 00:36:42.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:42.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:42.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:42.560 issued rwts: total=4120,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:42.560 job3: (groupid=0, jobs=1): err= 0: pid=612919: Tue Nov 19 09:54:29 2024 00:36:42.560 read: IOPS=4706, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1006msec) 00:36:42.560 slat (nsec): min=926, max=13273k, avg=85147.92, stdev=601374.21 00:36:42.560 clat (usec): min=2568, max=39924, avg=11209.70, stdev=4375.99 00:36:42.560 lat (usec): min=2757, max=39927, avg=11294.84, stdev=4420.29 00:36:42.560 clat percentiles (usec): 00:36:42.560 | 1.00th=[ 4621], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 7898], 00:36:42.560 | 30.00th=[ 8586], 40.00th=[10028], 50.00th=[10552], 60.00th=[11207], 00:36:42.560 | 70.00th=[12125], 80.00th=[12780], 90.00th=[16319], 95.00th=[20317], 00:36:42.560 | 99.00th=[27919], 99.50th=[34866], 99.90th=[40109], 99.95th=[40109], 00:36:42.560 | 99.99th=[40109] 00:36:42.560 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:36:42.560 slat (nsec): min=1519, max=9900.6k, avg=104990.92, stdev=591212.06 00:36:42.560 clat (usec): min=1712, max=91561, avg=14561.70, stdev=14003.42 00:36:42.560 lat (usec): min=1720, max=91568, avg=14666.69, stdev=14098.52 00:36:42.560 clat percentiles (usec): 00:36:42.560 | 1.00th=[ 2835], 5.00th=[ 5014], 10.00th=[ 6194], 20.00th=[ 7373], 00:36:42.560 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10683], 00:36:42.560 | 70.00th=[12387], 80.00th=[16188], 90.00th=[29230], 95.00th=[44303], 00:36:42.560 | 99.00th=[83362], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:36:42.560 | 99.99th=[91751] 00:36:42.560 bw ( KiB/s): min=12288, max=28664, per=27.31%, avg=20476.00, stdev=11579.58, samples=2 00:36:42.560 iops : min= 3072, max= 7166, avg=5119.00, stdev=2894.90, samples=2 00:36:42.560 lat (msec) : 2=0.14%, 4=0.99%, 10=46.45%, 20=41.25%, 50=9.48% 
00:36:42.560 lat (msec) : 100=1.68% 00:36:42.560 cpu : usr=3.38%, sys=5.67%, ctx=423, majf=0, minf=2 00:36:42.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:42.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:42.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:42.561 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:42.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:42.561 00:36:42.561 Run status group 0 (all jobs): 00:36:42.561 READ: bw=68.1MiB/s (71.4MB/s), 15.9MiB/s-18.4MiB/s (16.7MB/s-19.3MB/s), io=68.6MiB (71.9MB), run=1003-1007msec 00:36:42.561 WRITE: bw=73.2MiB/s (76.8MB/s), 16.2MiB/s-19.9MiB/s (17.0MB/s-20.8MB/s), io=73.7MiB (77.3MB), run=1003-1007msec 00:36:42.561 00:36:42.561 Disk stats (read/write): 00:36:42.561 nvme0n1: ios=3555/3584, merge=0/0, ticks=26865/33731, in_queue=60596, util=96.69% 00:36:42.561 nvme0n2: ios=4147/4239, merge=0/0, ticks=31764/30224, in_queue=61988, util=96.94% 00:36:42.561 nvme0n3: ios=3203/3584, merge=0/0, ticks=14350/23420, in_queue=37770, util=96.20% 00:36:42.561 nvme0n4: ios=3584/3679, merge=0/0, ticks=31729/51138, in_queue=82867, util=89.43% 00:36:42.561 09:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:42.820 09:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=613164 00:36:42.820 09:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:42.820 09:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:42.820 [global] 00:36:42.820 thread=1 00:36:42.820 invalidate=1 00:36:42.820 rw=read 00:36:42.820 time_based=1 00:36:42.820 runtime=10 00:36:42.820 ioengine=libaio 00:36:42.820 direct=1 00:36:42.820 
bs=4096 00:36:42.820 iodepth=1 00:36:42.820 norandommap=1 00:36:42.820 numjobs=1 00:36:42.820 00:36:42.820 [job0] 00:36:42.820 filename=/dev/nvme0n1 00:36:42.820 [job1] 00:36:42.820 filename=/dev/nvme0n2 00:36:42.820 [job2] 00:36:42.820 filename=/dev/nvme0n3 00:36:42.820 [job3] 00:36:42.820 filename=/dev/nvme0n4 00:36:42.820 Could not set queue depth (nvme0n1) 00:36:42.820 Could not set queue depth (nvme0n2) 00:36:42.820 Could not set queue depth (nvme0n3) 00:36:42.820 Could not set queue depth (nvme0n4) 00:36:43.081 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:43.081 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:43.081 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:43.081 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:43.081 fio-3.35 00:36:43.081 Starting 4 threads 00:36:45.633 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:45.894 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10485760, buflen=4096 00:36:45.894 fio: pid=613438, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:45.894 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:46.155 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9162752, buflen=4096 00:36:46.155 fio: pid=613437, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:46.155 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:36:46.155 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:46.155 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=8892416, buflen=4096 00:36:46.155 fio: pid=613435, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:46.155 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:46.155 09:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:46.417 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:46.417 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:46.417 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2359296, buflen=4096 00:36:46.417 fio: pid=613436, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:46.417 00:36:46.417 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=613435: Tue Nov 19 09:54:33 2024 00:36:46.417 read: IOPS=731, BW=2923KiB/s (2993kB/s)(8684KiB/2971msec) 00:36:46.417 slat (usec): min=6, max=29708, avg=47.06, stdev=712.30 00:36:46.417 clat (usec): min=482, max=41917, avg=1304.64, stdev=3227.55 00:36:46.417 lat (usec): min=508, max=41943, avg=1351.71, stdev=3303.91 00:36:46.417 clat percentiles (usec): 00:36:46.417 | 1.00th=[ 725], 5.00th=[ 857], 10.00th=[ 914], 20.00th=[ 971], 00:36:46.417 | 30.00th=[ 1004], 40.00th=[ 1029], 
50.00th=[ 1057], 60.00th=[ 1074], 00:36:46.417 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1221], 00:36:46.417 | 99.00th=[ 1319], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:46.417 | 99.99th=[41681] 00:36:46.417 bw ( KiB/s): min= 1128, max= 3768, per=30.07%, avg=2856.00, stdev=1233.54, samples=5 00:36:46.417 iops : min= 282, max= 942, avg=714.00, stdev=308.39, samples=5 00:36:46.417 lat (usec) : 500=0.05%, 750=1.06%, 1000=28.78% 00:36:46.417 lat (msec) : 2=69.38%, 10=0.05%, 50=0.64% 00:36:46.417 cpu : usr=1.45%, sys=2.76%, ctx=2174, majf=0, minf=2 00:36:46.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.417 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.417 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:46.417 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=613436: Tue Nov 19 09:54:33 2024 00:36:46.417 read: IOPS=181, BW=725KiB/s (743kB/s)(2304KiB/3177msec) 00:36:46.417 slat (usec): min=6, max=25598, avg=125.50, stdev=1436.31 00:36:46.417 clat (usec): min=759, max=42788, avg=5344.63, stdev=12329.25 00:36:46.417 lat (usec): min=786, max=66926, avg=5470.31, stdev=12696.03 00:36:46.417 clat percentiles (usec): 00:36:46.417 | 1.00th=[ 930], 5.00th=[ 996], 10.00th=[ 1057], 20.00th=[ 1106], 00:36:46.417 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:36:46.417 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[41157], 95.00th=[41681], 00:36:46.417 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:46.417 | 99.99th=[42730] 00:36:46.417 bw ( KiB/s): min= 95, max= 1728, per=8.02%, avg=762.50, stdev=747.56, samples=6 00:36:46.417 iops : min= 23, max= 432, avg=190.50, stdev=187.02, samples=6 
00:36:46.417 lat (usec) : 1000=5.20% 00:36:46.417 lat (msec) : 2=84.23%, 20=0.17%, 50=10.23% 00:36:46.417 cpu : usr=0.19%, sys=0.85%, ctx=580, majf=0, minf=1 00:36:46.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.417 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.417 issued rwts: total=577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:46.417 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=613437: Tue Nov 19 09:54:33 2024 00:36:46.417 read: IOPS=804, BW=3218KiB/s (3295kB/s)(8948KiB/2781msec) 00:36:46.417 slat (nsec): min=7222, max=60240, avg=26143.15, stdev=3168.06 00:36:46.417 clat (usec): min=488, max=4383, avg=1200.92, stdev=150.61 00:36:46.417 lat (usec): min=514, max=4410, avg=1227.06, stdev=150.90 00:36:46.417 clat percentiles (usec): 00:36:46.417 | 1.00th=[ 766], 5.00th=[ 971], 10.00th=[ 1057], 20.00th=[ 1123], 00:36:46.417 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1237], 00:36:46.417 | 70.00th=[ 1254], 80.00th=[ 1287], 90.00th=[ 1319], 95.00th=[ 1369], 00:36:46.417 | 99.00th=[ 1418], 99.50th=[ 1450], 99.90th=[ 1713], 99.95th=[ 3687], 00:36:46.417 | 99.99th=[ 4359] 00:36:46.417 bw ( KiB/s): min= 3112, max= 3512, per=34.11%, avg=3240.00, stdev=158.49, samples=5 00:36:46.417 iops : min= 778, max= 878, avg=810.00, stdev=39.62, samples=5 00:36:46.417 lat (usec) : 500=0.09%, 750=0.85%, 1000=5.23% 00:36:46.417 lat (msec) : 2=93.70%, 4=0.04%, 10=0.04% 00:36:46.417 cpu : usr=0.94%, sys=2.52%, ctx=2238, majf=0, minf=2 00:36:46.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.417 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:46.417 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:46.417 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=613438: Tue Nov 19 09:54:33 2024 00:36:46.417 read: IOPS=983, BW=3931KiB/s (4025kB/s)(10.0MiB/2605msec) 00:36:46.417 slat (nsec): min=6550, max=60588, avg=26043.08, stdev=4383.62 00:36:46.417 clat (usec): min=450, max=1498, avg=975.04, stdev=151.25 00:36:46.417 lat (usec): min=457, max=1525, avg=1001.08, stdev=152.29 00:36:46.417 clat percentiles (usec): 00:36:46.417 | 1.00th=[ 578], 5.00th=[ 676], 10.00th=[ 742], 20.00th=[ 840], 00:36:46.417 | 30.00th=[ 922], 40.00th=[ 979], 50.00th=[ 1020], 60.00th=[ 1045], 00:36:46.417 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1156], 00:36:46.417 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1352], 99.95th=[ 1418], 00:36:46.417 | 99.99th=[ 1500] 00:36:46.417 bw ( KiB/s): min= 3656, max= 4448, per=41.94%, avg=3984.00, stdev=406.07, samples=5 00:36:46.417 iops : min= 914, max= 1112, avg=996.00, stdev=101.52, samples=5 00:36:46.417 lat (usec) : 500=0.23%, 750=10.27%, 1000=34.36% 00:36:46.417 lat (msec) : 2=55.10% 00:36:46.417 cpu : usr=1.96%, sys=3.61%, ctx=2561, majf=0, minf=2 00:36:46.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.417 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.417 issued rwts: total=2561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:46.417 00:36:46.417 Run status group 0 (all jobs): 00:36:46.417 READ: bw=9498KiB/s (9726kB/s), 725KiB/s-3931KiB/s (743kB/s-4025kB/s), io=29.5MiB (30.9MB), run=2605-3177msec 00:36:46.417 00:36:46.417 Disk stats (read/write): 00:36:46.417 nvme0n1: ios=2060/0, merge=0/0, 
ticks=2547/0, in_queue=2547, util=93.36% 00:36:46.417 nvme0n2: ios=574/0, merge=0/0, ticks=2939/0, in_queue=2939, util=94.27% 00:36:46.417 nvme0n3: ios=2093/0, merge=0/0, ticks=2443/0, in_queue=2443, util=96.03% 00:36:46.417 nvme0n4: ios=2561/0, merge=0/0, ticks=2232/0, in_queue=2232, util=96.39% 00:36:46.678 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:46.678 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:46.939 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:46.939 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:46.939 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:46.939 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:47.200 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:47.200 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:47.461 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:47.461 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@70 -- # wait 613164 00:36:47.461 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:47.461 09:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:47.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:47.461 nvmf hotplug test: fio failed as expected 00:36:47.461 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:47.723 rmmod nvme_tcp 00:36:47.723 rmmod nvme_fabrics 00:36:47.723 rmmod nvme_keyring 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 609774 ']' 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 
609774 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 609774 ']' 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 609774 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 609774 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 609774' 00:36:47.723 killing process with pid 609774 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 609774 00:36:47.723 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 609774 00:36:47.985 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:47.985 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:47.985 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:47.985 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:47.985 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:36:47.985 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:47.985 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:47.985 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:47.985 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:47.986 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.986 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:47.986 09:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.903 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:49.903 00:36:49.903 real 0m28.330s 00:36:49.903 user 2m23.830s 00:36:49.903 sys 0m12.113s 00:36:49.903 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:49.903 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:49.903 ************************************ 00:36:49.903 END TEST nvmf_fio_target 00:36:49.903 ************************************ 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:50.165 09:54:36 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:50.165 ************************************ 00:36:50.165 START TEST nvmf_bdevio 00:36:50.165 ************************************ 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:50.165 * Looking for test storage... 00:36:50.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:50.165 09:54:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:50.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.165 --rc genhtml_branch_coverage=1 
00:36:50.165 --rc genhtml_function_coverage=1 00:36:50.165 --rc genhtml_legend=1 00:36:50.165 --rc geninfo_all_blocks=1 00:36:50.165 --rc geninfo_unexecuted_blocks=1 00:36:50.165 00:36:50.165 ' 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:50.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.165 --rc genhtml_branch_coverage=1 00:36:50.165 --rc genhtml_function_coverage=1 00:36:50.165 --rc genhtml_legend=1 00:36:50.165 --rc geninfo_all_blocks=1 00:36:50.165 --rc geninfo_unexecuted_blocks=1 00:36:50.165 00:36:50.165 ' 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:50.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.165 --rc genhtml_branch_coverage=1 00:36:50.165 --rc genhtml_function_coverage=1 00:36:50.165 --rc genhtml_legend=1 00:36:50.165 --rc geninfo_all_blocks=1 00:36:50.165 --rc geninfo_unexecuted_blocks=1 00:36:50.165 00:36:50.165 ' 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:50.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.165 --rc genhtml_branch_coverage=1 00:36:50.165 --rc genhtml_function_coverage=1 00:36:50.165 --rc genhtml_legend=1 00:36:50.165 --rc geninfo_all_blocks=1 00:36:50.165 --rc geninfo_unexecuted_blocks=1 00:36:50.165 00:36:50.165 ' 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:50.165 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:50.427 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:50.428 09:54:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:50.428 09:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:58.577 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:58.577 09:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:58.578 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:58.578 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.578 09:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:58.578 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:58.578 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:58.578 09:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:58.578 09:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:58.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:58.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:36:58.578 00:36:58.578 --- 10.0.0.2 ping statistics --- 00:36:58.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.578 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:58.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:58.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:36:58.578 00:36:58.578 --- 10.0.0.1 ping statistics --- 00:36:58.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.578 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:58.578 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=618458 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 618458 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 618458 ']' 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:58.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:58.579 09:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:58.579 [2024-11-19 09:54:44.352809] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:58.579 [2024-11-19 09:54:44.353804] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:36:58.579 [2024-11-19 09:54:44.353842] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:58.579 [2024-11-19 09:54:44.446683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:58.579 [2024-11-19 09:54:44.483589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:58.579 [2024-11-19 09:54:44.483621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:58.579 [2024-11-19 09:54:44.483630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:58.579 [2024-11-19 09:54:44.483636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:58.579 [2024-11-19 09:54:44.483642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:58.579 [2024-11-19 09:54:44.485374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:58.579 [2024-11-19 09:54:44.485522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:58.579 [2024-11-19 09:54:44.485637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:58.579 [2024-11-19 09:54:44.485638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:58.579 [2024-11-19 09:54:44.541467] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:58.579 [2024-11-19 09:54:44.542715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:58.579 [2024-11-19 09:54:44.543095] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:58.579 [2024-11-19 09:54:44.543600] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:58.579 [2024-11-19 09:54:44.543648] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:58.579 [2024-11-19 09:54:45.174485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:58.579 Malloc0 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:58.579 [2024-11-19 09:54:45.262779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:58.579 { 00:36:58.579 "params": { 00:36:58.579 "name": "Nvme$subsystem", 00:36:58.579 "trtype": "$TEST_TRANSPORT", 00:36:58.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.579 "adrfam": "ipv4", 00:36:58.579 "trsvcid": "$NVMF_PORT", 00:36:58.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.579 "hdgst": ${hdgst:-false}, 00:36:58.579 "ddgst": ${ddgst:-false} 00:36:58.579 }, 00:36:58.579 "method": "bdev_nvme_attach_controller" 00:36:58.579 } 00:36:58.579 EOF 00:36:58.579 )") 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:58.579 09:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:58.579 "params": { 00:36:58.579 "name": "Nvme1", 00:36:58.579 "trtype": "tcp", 00:36:58.579 "traddr": "10.0.0.2", 00:36:58.579 "adrfam": "ipv4", 00:36:58.579 "trsvcid": "4420", 00:36:58.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:58.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:58.579 "hdgst": false, 00:36:58.579 "ddgst": false 00:36:58.579 }, 00:36:58.579 "method": "bdev_nvme_attach_controller" 00:36:58.579 }' 00:36:58.579 [2024-11-19 09:54:45.319130] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:58.579 [2024-11-19 09:54:45.319212] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618545 ] 00:36:58.841 [2024-11-19 09:54:45.411929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:58.841 [2024-11-19 09:54:45.468952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:58.841 [2024-11-19 09:54:45.469115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.841 [2024-11-19 09:54:45.469115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:59.103 I/O targets: 00:36:59.103 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:59.103 00:36:59.103 00:36:59.103 CUnit - A unit testing framework for C - Version 2.1-3 00:36:59.103 http://cunit.sourceforge.net/ 00:36:59.103 00:36:59.103 00:36:59.103 Suite: bdevio tests on: Nvme1n1 00:36:59.103 Test: blockdev write read block ...passed 00:36:59.365 Test: blockdev write zeroes read block ...passed 00:36:59.365 Test: blockdev write zeroes read no split ...passed 00:36:59.365 Test: blockdev 
write zeroes read split ...passed 00:36:59.365 Test: blockdev write zeroes read split partial ...passed 00:36:59.365 Test: blockdev reset ...[2024-11-19 09:54:45.918740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:59.365 [2024-11-19 09:54:45.918842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd0970 (9): Bad file descriptor 00:36:59.365 [2024-11-19 09:54:46.014034] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:36:59.365 passed 00:36:59.365 Test: blockdev write read 8 blocks ...passed 00:36:59.365 Test: blockdev write read size > 128k ...passed 00:36:59.365 Test: blockdev write read invalid size ...passed 00:36:59.365 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:59.365 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:59.365 Test: blockdev write read max offset ...passed 00:36:59.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:59.626 Test: blockdev writev readv 8 blocks ...passed 00:36:59.626 Test: blockdev writev readv 30 x 1block ...passed 00:36:59.626 Test: blockdev writev readv block ...passed 00:36:59.626 Test: blockdev writev readv size > 128k ...passed 00:36:59.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:59.626 Test: blockdev comparev and writev ...[2024-11-19 09:54:46.240718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:59.626 [2024-11-19 09:54:46.240767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:59.626 [2024-11-19 09:54:46.240784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:59.626 
[2024-11-19 09:54:46.240792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.626 [2024-11-19 09:54:46.241425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:59.626 [2024-11-19 09:54:46.241437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:59.626 [2024-11-19 09:54:46.241452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:59.626 [2024-11-19 09:54:46.241459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:59.626 [2024-11-19 09:54:46.242078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:59.626 [2024-11-19 09:54:46.242089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:59.626 [2024-11-19 09:54:46.242103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:59.626 [2024-11-19 09:54:46.242111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:59.626 [2024-11-19 09:54:46.242727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:59.626 [2024-11-19 09:54:46.242739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:59.627 [2024-11-19 09:54:46.242753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:59.627 [2024-11-19 09:54:46.242761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:59.627 passed 00:36:59.627 Test: blockdev nvme passthru rw ...passed 00:36:59.627 Test: blockdev nvme passthru vendor specific ...[2024-11-19 09:54:46.327007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:59.627 [2024-11-19 09:54:46.327020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:59.627 [2024-11-19 09:54:46.327386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:59.627 [2024-11-19 09:54:46.327397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:59.627 [2024-11-19 09:54:46.327732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:59.627 [2024-11-19 09:54:46.327742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:59.627 [2024-11-19 09:54:46.328098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:59.627 [2024-11-19 09:54:46.328109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:59.627 passed 00:36:59.627 Test: blockdev nvme admin passthru ...passed 00:36:59.889 Test: blockdev copy ...passed 00:36:59.889 00:36:59.889 Run Summary: Type Total Ran Passed Failed Inactive 00:36:59.889 suites 1 1 n/a 0 0 00:36:59.889 tests 23 23 23 0 0 00:36:59.889 asserts 152 152 152 0 n/a 00:36:59.889 00:36:59.889 Elapsed time = 1.185 
seconds 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:59.889 rmmod nvme_tcp 00:36:59.889 rmmod nvme_fabrics 00:36:59.889 rmmod nvme_keyring 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 618458 ']' 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 618458 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 618458 ']' 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 618458 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:59.889 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 618458 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 618458' 00:37:00.152 killing process with pid 618458 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 618458 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 618458 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:00.152 09:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.701 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:02.701 00:37:02.701 real 0m12.167s 00:37:02.702 user 0m10.224s 00:37:02.702 sys 0m6.387s 00:37:02.702 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:02.702 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:02.702 ************************************ 00:37:02.702 END TEST nvmf_bdevio 00:37:02.702 ************************************ 00:37:02.702 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:02.702 00:37:02.702 real 5m0.102s 00:37:02.702 user 10m24.762s 00:37:02.702 sys 2m3.274s 00:37:02.702 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:37:02.702 09:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:02.702 ************************************ 00:37:02.702 END TEST nvmf_target_core_interrupt_mode 00:37:02.702 ************************************ 00:37:02.702 09:54:48 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:02.702 09:54:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:02.702 09:54:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.702 09:54:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:02.702 ************************************ 00:37:02.702 START TEST nvmf_interrupt 00:37:02.702 ************************************ 00:37:02.702 09:54:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:02.702 * Looking for test storage... 
00:37:02.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:02.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.702 --rc genhtml_branch_coverage=1 00:37:02.702 --rc genhtml_function_coverage=1 00:37:02.702 --rc genhtml_legend=1 00:37:02.702 --rc geninfo_all_blocks=1 00:37:02.702 --rc geninfo_unexecuted_blocks=1 00:37:02.702 00:37:02.702 ' 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:02.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.702 --rc genhtml_branch_coverage=1 00:37:02.702 --rc 
genhtml_function_coverage=1 00:37:02.702 --rc genhtml_legend=1 00:37:02.702 --rc geninfo_all_blocks=1 00:37:02.702 --rc geninfo_unexecuted_blocks=1 00:37:02.702 00:37:02.702 ' 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:02.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.702 --rc genhtml_branch_coverage=1 00:37:02.702 --rc genhtml_function_coverage=1 00:37:02.702 --rc genhtml_legend=1 00:37:02.702 --rc geninfo_all_blocks=1 00:37:02.702 --rc geninfo_unexecuted_blocks=1 00:37:02.702 00:37:02.702 ' 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:02.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.702 --rc genhtml_branch_coverage=1 00:37:02.702 --rc genhtml_function_coverage=1 00:37:02.702 --rc genhtml_legend=1 00:37:02.702 --rc geninfo_all_blocks=1 00:37:02.702 --rc geninfo_unexecuted_blocks=1 00:37:02.702 00:37:02.702 ' 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.702 
09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.702 09:54:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.703 
09:54:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:02.703 09:54:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:02.703 
09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:37:02.703 09:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:10.849 09:54:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:10.849 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:10.849 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:10.849 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:10.850 09:54:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:10.850 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:10.850 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:10.850 09:54:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:10.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:10.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:37:10.850 00:37:10.850 --- 10.0.0.2 ping statistics --- 00:37:10.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.850 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:10.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:10.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:37:10.850 00:37:10.850 --- 10.0.0.1 ping statistics --- 00:37:10.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.850 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:10.850 09:54:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=623018 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 623018 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 623018 ']' 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:10.850 09:54:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:10.850 [2024-11-19 09:54:56.551931] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:10.850 [2024-11-19 09:54:56.553073] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:37:10.850 [2024-11-19 09:54:56.553125] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:10.850 [2024-11-19 09:54:56.651671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:10.850 [2024-11-19 09:54:56.703445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:10.850 [2024-11-19 09:54:56.703500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:10.850 [2024-11-19 09:54:56.703509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:10.850 [2024-11-19 09:54:56.703517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:10.850 [2024-11-19 09:54:56.703523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:10.850 [2024-11-19 09:54:56.707193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.850 [2024-11-19 09:54:56.707344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:10.850 [2024-11-19 09:54:56.783835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:10.850 [2024-11-19 09:54:56.783959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:10.850 [2024-11-19 09:54:56.784105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:10.850 5000+0 records in 00:37:10.850 5000+0 records out 00:37:10.850 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0187204 s, 547 MB/s 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:10.850 AIO0 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.850 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.851 09:54:57 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:10.851 [2024-11-19 09:54:57.488287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:10.851 [2024-11-19 09:54:57.532926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 623018 0 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 623018 0 idle 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=623018 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 623018 -w 256 00:37:10.851 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 623018 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0' 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 623018 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 623018 1 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 623018 1 idle 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=623018 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 623018 -w 256 00:37:11.112 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 623055 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 623055 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 
reactor_1 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=623229 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 623018 0 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 623018 0 busy 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=623018 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 623018 -w 256 00:37:11.373 09:54:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:11.373 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 623018 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.48 reactor_0' 00:37:11.373 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 623018 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.48 reactor_0 00:37:11.373 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:11.373 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:11.373 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:11.373 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:11.373 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 623018 1 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 623018 1 busy 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=623018 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 623018 -w 256 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 623055 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.27 reactor_1' 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 623055 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.27 reactor_1 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:11.635 09:54:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 623229 00:37:21.635 Initializing NVMe Controllers 00:37:21.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:21.635 Controller IO queue size 256, less than required. 00:37:21.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:21.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:21.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:21.635 Initialization complete. Launching workers. 
00:37:21.635 ======================================================== 00:37:21.635 Latency(us) 00:37:21.635 Device Information : IOPS MiB/s Average min max 00:37:21.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19377.70 75.69 13215.44 4172.89 34085.84 00:37:21.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20096.20 78.50 12740.31 7679.59 30334.05 00:37:21.635 ======================================================== 00:37:21.635 Total : 39473.90 154.19 12973.55 4172.89 34085.84 00:37:21.635 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 623018 0 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 623018 0 idle 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=623018 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 623018 -w 256 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 623018 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0' 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 623018 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:21.635 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 623018 1 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 623018 1 idle 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=623018 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:21.636 09:55:08 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 623018 -w 256 00:37:21.636 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 623055 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 623055 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:21.896 09:55:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:22.467 09:55:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
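The busy/idle probes traced above (interrupt/common.sh `reactor_is_busy_or_idle`) boil down to parsing one `top -bHn 1` snapshot for the `reactor_<idx>` thread and comparing its %CPU column to a threshold. A minimal sketch of that extraction, written against a captured `top` line from this log rather than a live process so it runs standalone (function names are illustrative, not from the script):

```shell
#!/usr/bin/env bash

# Strip leading whitespace, take column 9 (%CPU) of a `top -bHn 1` thread
# line, and drop the fractional part -- the same sed/awk pipeline seen in
# the interrupt/common.sh@27-28 trace.
extract_cpu_rate() {
  echo "$1" | sed -e 's/^\s*//g' | awk '{print $9}' | cut -d. -f1
}

# Classify the rate the way the traced checks do: busy when at or above the
# busy threshold, idle when at or below the idle threshold (defaults of 30
# match BUSY_THRESHOLD in the log; exact semantics here are a sketch).
reactor_state() {
  local rate=$1 busy_threshold=${2:-30} idle_threshold=${3:-30}
  if (( rate >= busy_threshold )); then
    echo busy
  elif (( rate <= idle_threshold )); then
    echo idle
  else
    echo unknown
  fi
}

# Sample thread line copied from the trace above.
top_line=' 623055 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.27 reactor_1'
rate=$(extract_cpu_rate "$top_line")
echo "$rate"            # 93
reactor_state "$rate"   # busy
```

In the real script the input line comes from `top -bHn 1 -p "$pid" -w 256 | grep reactor_$idx`, retried up to 10 times (the `(( j = 10 ))` loop in the trace) to ride out transient samples.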
00:37:22.467 09:55:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:37:22.467 09:55:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:22.467 09:55:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:22.467 09:55:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:37:24.379 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 623018 0 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 623018 0 idle 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=623018 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 623018 -w 256 00:37:24.380 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 623018 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.66 reactor_0' 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 623018 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.66 reactor_0 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 623018 1 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 623018 1 idle 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=623018 00:37:24.641 
09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 623018 -w 256 00:37:24.641 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 623055 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1' 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 623055 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:24.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:24.901 rmmod nvme_tcp 00:37:24.901 rmmod nvme_fabrics 00:37:24.901 rmmod nvme_keyring 00:37:24.901 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:25.162 09:55:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 623018 ']' 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 623018 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 623018 ']' 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 623018 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 623018 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 623018' 00:37:25.162 killing process with pid 623018 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 623018 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 623018 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:25.162 09:55:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:27.708 09:55:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:27.708 00:37:27.708 real 0m24.960s 00:37:27.708 user 0m40.310s 00:37:27.708 sys 0m9.443s 00:37:27.708 09:55:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.708 09:55:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:27.708 ************************************ 00:37:27.708 END TEST nvmf_interrupt 00:37:27.708 ************************************ 00:37:27.708 00:37:27.708 real 30m10.926s 00:37:27.708 user 62m3.838s 00:37:27.708 sys 10m9.921s 00:37:27.708 09:55:13 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.708 09:55:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:27.708 ************************************ 00:37:27.708 END TEST nvmf_tcp 00:37:27.708 ************************************ 00:37:27.708 09:55:14 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:37:27.708 09:55:14 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:27.708 09:55:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:27.708 09:55:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.708 09:55:14 -- common/autotest_common.sh@10 -- # set +x 00:37:27.708 ************************************ 
00:37:27.708 START TEST spdkcli_nvmf_tcp 00:37:27.708 ************************************ 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:27.708 * Looking for test storage... 00:37:27.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.708 --rc genhtml_branch_coverage=1 00:37:27.708 --rc genhtml_function_coverage=1 00:37:27.708 --rc genhtml_legend=1 00:37:27.708 --rc geninfo_all_blocks=1 00:37:27.708 --rc geninfo_unexecuted_blocks=1 00:37:27.708 00:37:27.708 ' 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.708 --rc genhtml_branch_coverage=1 00:37:27.708 --rc genhtml_function_coverage=1 00:37:27.708 --rc genhtml_legend=1 00:37:27.708 --rc geninfo_all_blocks=1 
00:37:27.708 --rc geninfo_unexecuted_blocks=1 00:37:27.708 00:37:27.708 ' 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.708 --rc genhtml_branch_coverage=1 00:37:27.708 --rc genhtml_function_coverage=1 00:37:27.708 --rc genhtml_legend=1 00:37:27.708 --rc geninfo_all_blocks=1 00:37:27.708 --rc geninfo_unexecuted_blocks=1 00:37:27.708 00:37:27.708 ' 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.708 --rc genhtml_branch_coverage=1 00:37:27.708 --rc genhtml_function_coverage=1 00:37:27.708 --rc genhtml_legend=1 00:37:27.708 --rc geninfo_all_blocks=1 00:37:27.708 --rc geninfo_unexecuted_blocks=1 00:37:27.708 00:37:27.708 ' 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
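The `lt 1.15 2` check traced from scripts/common.sh above splits each version string on `.`, `-`, and `:` (`IFS=.-:` with `read -ra`), then compares component by component. A standalone sketch of that comparison, reconstructed from the trace (the helper name `version_lt` is illustrative; the script's own entry points are `lt` and `cmp_versions`):

```shell
#!/usr/bin/env bash

# Return 0 (true) if version $1 is strictly less than version $2.
# Components are split on . - : as in the scripts/common.sh@336-337 trace;
# missing trailing components are treated as 0.
version_lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=${#ver1[@]}
  (( ${#ver2[@]} > max )) && max=${#ver2[@]}
  for (( v = 0; v < max; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal is not less-than
}

# The case from the log: lcov 1.15 predates 2.x, so the old-lcov
# coverage options get selected.
version_lt 1.15 2 && echo "1.15 < 2"
```

Per-component numeric comparison is what makes `1.2.3 < 1.10.0` come out true, where a plain string compare would get it wrong.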
00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:27.708 09:55:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:27.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=626464 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 626464 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 626464 ']' 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.709 09:55:14 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:27.709 09:55:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:27.709 [2024-11-19 09:55:14.353530] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:37:27.709 [2024-11-19 09:55:14.353601] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626464 ] 00:37:27.709 [2024-11-19 09:55:14.446064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:27.970 [2024-11-19 09:55:14.501488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.970 [2024-11-19 09:55:14.501493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.543 09:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:28.543 09:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:28.543 09:55:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:28.543 09:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:28.543 09:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.543 09:55:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:28.543 09:55:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:28.543 09:55:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:28.543 
09:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:28.543 09:55:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.544 09:55:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:28.544 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:28.544 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:28.544 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:28.544 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:28.544 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:28.544 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:28.544 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:28.544 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:28.544 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:28.544 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:28.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:28.544 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:28.544 ' 00:37:31.843 [2024-11-19 09:55:17.959214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:32.784 [2024-11-19 09:55:19.319414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:35.327 [2024-11-19 09:55:21.838557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:37:37.873 [2024-11-19 09:55:24.064924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:39.258 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:39.258 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:39.258 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:39.258 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:39.258 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:39.258 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:39.258 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:39.258 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:39.258 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:39.258 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
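Each `Executing command:` entry in this run is a `[command, expected_output, expect_success]` triple driven by `spdkcli_job.py`: the command is run, and the test passes when the expected string's presence in the output agrees with the boolean. A minimal sketch of that style of check in shell — the function name and logic here are illustrative, not the script's actual implementation:

```shell
#!/usr/bin/env bash
# check OUTPUT EXPECTED SHOULD_MATCH
# Succeeds when EXPECTED's presence in OUTPUT agrees with SHOULD_MATCH
# (True/False, matching the booleans printed in the log above).
check() {
    local hit=False
    case "$1" in
        *"$2"*) hit=True ;;
    esac
    [ "$hit" = "$3" ]
}

check "Malloc1 created" "Malloc1" True  && echo "ok: positive match"
check "request failed"  "Malloc1" False && echo "ok: expected miss"
```

Both checks succeed here, so the sketch prints two `ok:` lines; a mismatch (e.g. expecting `Malloc1` in failed output with `True`) would make `check` return nonzero instead.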
00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:39.258 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:39.259 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:39.259 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:39.259 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:39.259 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:39.259 09:55:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:39.259 09:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:39.259 
09:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:39.259 09:55:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:39.259 09:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:39.259 09:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:39.259 09:55:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:39.259 09:55:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:39.831 09:55:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:39.831 09:55:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:39.831 09:55:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:39.831 09:55:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:39.831 09:55:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:39.831 09:55:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:39.831 09:55:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:39.831 09:55:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:39.831 09:55:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:39.831 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:39.831 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:39.831 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:39.831 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:39.831 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:39.831 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:39.831 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:39.831 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:39.831 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:39.831 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:39.831 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:39.831 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:39.831 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:39.831 ' 00:37:46.419 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:46.419 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:46.419 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:46.419 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:46.419 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:46.419 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:46.419 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:46.419 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:46.419 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:46.419 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:46.419 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:46.419 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:46.419 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:46.419 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 626464 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 626464 ']' 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 626464 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 626464 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 626464' 00:37:46.419 killing process with pid 626464 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 626464 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 626464 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 626464 ']' 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 626464 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 626464 ']' 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 626464 00:37:46.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (626464) - No such process 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 626464 is not found' 00:37:46.419 Process with pid 626464 is not found 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:46.419 00:37:46.419 real 0m18.191s 00:37:46.419 user 0m40.376s 00:37:46.419 sys 0m0.922s 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:46.419 09:55:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:46.419 ************************************ 00:37:46.419 END TEST spdkcli_nvmf_tcp 00:37:46.419 ************************************ 00:37:46.419 09:55:32 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:46.419 09:55:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:46.419 09:55:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:46.419 09:55:32 -- common/autotest_common.sh@10 
-- # set +x 00:37:46.419 ************************************ 00:37:46.419 START TEST nvmf_identify_passthru 00:37:46.419 ************************************ 00:37:46.419 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:46.419 * Looking for test storage... 00:37:46.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:46.419 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:46.419 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:37:46.419 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:46.419 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:46.419 09:55:32 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:46.419 09:55:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:46.420 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:46.420 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:46.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.420 --rc genhtml_branch_coverage=1 00:37:46.420 --rc genhtml_function_coverage=1 00:37:46.420 --rc genhtml_legend=1 00:37:46.420 --rc geninfo_all_blocks=1 00:37:46.420 --rc geninfo_unexecuted_blocks=1 00:37:46.420 00:37:46.420 ' 00:37:46.420 
09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:46.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.420 --rc genhtml_branch_coverage=1 00:37:46.420 --rc genhtml_function_coverage=1 00:37:46.420 --rc genhtml_legend=1 00:37:46.420 --rc geninfo_all_blocks=1 00:37:46.420 --rc geninfo_unexecuted_blocks=1 00:37:46.420 00:37:46.420 ' 00:37:46.420 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:46.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.420 --rc genhtml_branch_coverage=1 00:37:46.420 --rc genhtml_function_coverage=1 00:37:46.420 --rc genhtml_legend=1 00:37:46.420 --rc geninfo_all_blocks=1 00:37:46.420 --rc geninfo_unexecuted_blocks=1 00:37:46.420 00:37:46.420 ' 00:37:46.420 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:46.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.420 --rc genhtml_branch_coverage=1 00:37:46.420 --rc genhtml_function_coverage=1 00:37:46.420 --rc genhtml_legend=1 00:37:46.420 --rc geninfo_all_blocks=1 00:37:46.420 --rc geninfo_unexecuted_blocks=1 00:37:46.420 00:37:46.420 ' 00:37:46.420 09:55:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:46.420 09:55:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:46.420 09:55:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:46.420 09:55:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:46.420 09:55:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:46.420 09:55:32 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:46.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:46.420 09:55:32 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:46.420 09:55:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:46.420 09:55:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:46.420 09:55:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:46.420 09:55:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:46.420 09:55:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.420 09:55:32 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.420 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:46.420 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:46.420 09:55:32 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:46.420 09:55:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:53.009 
09:55:39 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:53.009 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:53.009 Found 0000:4b:00.1 
(0x8086 - 0x159b) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:53.009 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.009 09:55:39 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:53.009 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:53.009 
09:55:39 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:53.009 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:53.270 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:53.270 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:53.270 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:53.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:53.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:37:53.271 00:37:53.271 --- 10.0.0.2 ping statistics --- 00:37:53.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:53.271 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:53.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:53.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:37:53.271 00:37:53.271 --- 10.0.0.1 ping statistics --- 00:37:53.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:53.271 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:53.271 09:55:39 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:53.533 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:53.533 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:53.533 
09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:37:53.533 09:55:40 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:37:53.533 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:53.533 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:53.533 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:53.533 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:53.533 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:54.106 09:55:40 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:37:54.106 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:54.106 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:54.106 09:55:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:54.680 09:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:54.680 09:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:54.680 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.680 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:54.680 09:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:54.680 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:54.680 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:54.681 09:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=633807 00:37:54.681 09:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:54.681 09:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:54.681 09:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 633807 00:37:54.681 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 633807 ']' 00:37:54.681 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:37:54.681 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.681 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:54.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.681 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.681 09:55:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:54.681 [2024-11-19 09:55:41.254858] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:37:54.681 [2024-11-19 09:55:41.254935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:54.681 [2024-11-19 09:55:41.342579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:54.681 [2024-11-19 09:55:41.398169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:54.681 [2024-11-19 09:55:41.398228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:54.681 [2024-11-19 09:55:41.398237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:54.681 [2024-11-19 09:55:41.398244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:54.681 [2024-11-19 09:55:41.398250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:54.681 [2024-11-19 09:55:41.402191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:54.681 [2024-11-19 09:55:41.402279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:54.681 [2024-11-19 09:55:41.402444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:54.681 [2024-11-19 09:55:41.402445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.627 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:55.627 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:55.627 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:55.627 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.627 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.627 INFO: Log level set to 20 00:37:55.627 INFO: Requests: 00:37:55.627 { 00:37:55.627 "jsonrpc": "2.0", 00:37:55.627 "method": "nvmf_set_config", 00:37:55.627 "id": 1, 00:37:55.627 "params": { 00:37:55.627 "admin_cmd_passthru": { 00:37:55.627 "identify_ctrlr": true 00:37:55.627 } 00:37:55.627 } 00:37:55.627 } 00:37:55.627 00:37:55.627 INFO: response: 00:37:55.627 { 00:37:55.627 "jsonrpc": "2.0", 00:37:55.627 "id": 1, 00:37:55.627 "result": true 00:37:55.627 } 00:37:55.627 00:37:55.627 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.627 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:55.627 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.627 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.627 INFO: Setting log level to 20 00:37:55.627 INFO: Setting log level to 20 00:37:55.627 INFO: Log level set to 20 00:37:55.627 INFO: Log level set to 20 00:37:55.627 
INFO: Requests: 00:37:55.627 { 00:37:55.627 "jsonrpc": "2.0", 00:37:55.627 "method": "framework_start_init", 00:37:55.627 "id": 1 00:37:55.627 } 00:37:55.627 00:37:55.627 INFO: Requests: 00:37:55.627 { 00:37:55.627 "jsonrpc": "2.0", 00:37:55.627 "method": "framework_start_init", 00:37:55.627 "id": 1 00:37:55.627 } 00:37:55.627 00:37:55.627 [2024-11-19 09:55:42.164793] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:55.627 INFO: response: 00:37:55.627 { 00:37:55.627 "jsonrpc": "2.0", 00:37:55.627 "id": 1, 00:37:55.627 "result": true 00:37:55.627 } 00:37:55.627 00:37:55.627 INFO: response: 00:37:55.628 { 00:37:55.628 "jsonrpc": "2.0", 00:37:55.628 "id": 1, 00:37:55.628 "result": true 00:37:55.628 } 00:37:55.628 00:37:55.628 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.628 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:55.628 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.628 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.628 INFO: Setting log level to 40 00:37:55.628 INFO: Setting log level to 40 00:37:55.628 INFO: Setting log level to 40 00:37:55.628 [2024-11-19 09:55:42.178362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.628 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.628 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:55.628 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:55.628 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.628 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:55.628 09:55:42 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.628 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.890 Nvme0n1 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.890 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.890 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.890 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.890 [2024-11-19 09:55:42.577069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.890 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.890 09:55:42 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:55.890 [ 00:37:55.890 { 00:37:55.890 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:55.890 "subtype": "Discovery", 00:37:55.890 "listen_addresses": [], 00:37:55.890 "allow_any_host": true, 00:37:55.890 "hosts": [] 00:37:55.890 }, 00:37:55.890 { 00:37:55.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:55.890 "subtype": "NVMe", 00:37:55.890 "listen_addresses": [ 00:37:55.890 { 00:37:55.890 "trtype": "TCP", 00:37:55.890 "adrfam": "IPv4", 00:37:55.890 "traddr": "10.0.0.2", 00:37:55.890 "trsvcid": "4420" 00:37:55.890 } 00:37:55.890 ], 00:37:55.890 "allow_any_host": true, 00:37:55.890 "hosts": [], 00:37:55.890 "serial_number": "SPDK00000000000001", 00:37:55.890 "model_number": "SPDK bdev Controller", 00:37:55.890 "max_namespaces": 1, 00:37:55.890 "min_cntlid": 1, 00:37:55.890 "max_cntlid": 65519, 00:37:55.890 "namespaces": [ 00:37:55.890 { 00:37:55.890 "nsid": 1, 00:37:55.890 "bdev_name": "Nvme0n1", 00:37:55.890 "name": "Nvme0n1", 00:37:55.890 "nguid": "36344730526054870025384500000044", 00:37:55.890 "uuid": "36344730-5260-5487-0025-384500000044" 00:37:55.890 } 00:37:55.890 ] 00:37:55.890 } 00:37:55.890 ] 00:37:55.890 09:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.890 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:55.890 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:55.890 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:56.151 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:37:56.151 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:56.151 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:56.151 09:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:56.413 09:55:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:56.413 09:55:43 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:37:56.413 09:55:43 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:56.413 09:55:43 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:56.413 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.413 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.413 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.413 09:55:43 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:56.413 09:55:43 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:56.413 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:56.413 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:56.413 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:56.413 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:56.413 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:56.413 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:56.413 rmmod nvme_tcp 00:37:56.413 rmmod nvme_fabrics 00:37:56.413 rmmod nvme_keyring 00:37:56.413 09:55:43 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:56.674 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:56.674 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:56.674 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 633807 ']' 00:37:56.674 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 633807 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 633807 ']' 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 633807 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633807 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633807' 00:37:56.674 killing process with pid 633807 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 633807 00:37:56.674 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 633807 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:56.935 09:55:43 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.935 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:56.935 09:55:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:58.850 09:55:45 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:58.850 00:37:58.850 real 0m13.246s 00:37:58.850 user 0m10.568s 00:37:58.850 sys 0m6.742s 00:37:58.850 09:55:45 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.850 09:55:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.850 ************************************ 00:37:58.850 END TEST nvmf_identify_passthru 00:37:58.850 ************************************ 00:37:59.111 09:55:45 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:59.111 09:55:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:59.111 09:55:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:59.111 09:55:45 -- common/autotest_common.sh@10 -- # set +x 00:37:59.111 ************************************ 00:37:59.111 START TEST nvmf_dif 00:37:59.111 ************************************ 00:37:59.111 09:55:45 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:59.111 * Looking for test storage... 
00:37:59.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:59.111 09:55:45 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:59.111 09:55:45 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:37:59.111 09:55:45 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:59.111 09:55:45 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:59.111 09:55:45 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:59.112 09:55:45 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:59.112 09:55:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:59.112 09:55:45 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:59.112 09:55:45 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:59.112 09:55:45 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:59.112 09:55:45 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:59.112 09:55:45 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:59.112 09:55:45 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:59.112 09:55:45 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:59.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.112 --rc genhtml_branch_coverage=1 00:37:59.112 --rc genhtml_function_coverage=1 00:37:59.112 --rc genhtml_legend=1 00:37:59.112 --rc geninfo_all_blocks=1 00:37:59.112 --rc geninfo_unexecuted_blocks=1 00:37:59.112 00:37:59.112 ' 00:37:59.112 09:55:45 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:59.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.112 --rc genhtml_branch_coverage=1 00:37:59.112 --rc genhtml_function_coverage=1 00:37:59.112 --rc genhtml_legend=1 00:37:59.112 --rc geninfo_all_blocks=1 00:37:59.112 --rc geninfo_unexecuted_blocks=1 00:37:59.112 00:37:59.112 ' 00:37:59.112 09:55:45 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:37:59.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.112 --rc genhtml_branch_coverage=1 00:37:59.112 --rc genhtml_function_coverage=1 00:37:59.112 --rc genhtml_legend=1 00:37:59.112 --rc geninfo_all_blocks=1 00:37:59.112 --rc geninfo_unexecuted_blocks=1 00:37:59.112 00:37:59.112 ' 00:37:59.112 09:55:45 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:59.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.112 --rc genhtml_branch_coverage=1 00:37:59.112 --rc genhtml_function_coverage=1 00:37:59.112 --rc genhtml_legend=1 00:37:59.112 --rc geninfo_all_blocks=1 00:37:59.112 --rc geninfo_unexecuted_blocks=1 00:37:59.112 00:37:59.112 ' 00:37:59.112 09:55:45 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.112 09:55:45 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:59.373 09:55:45 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:59.373 09:55:45 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:59.373 09:55:45 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.373 09:55:45 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.373 09:55:45 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.373 09:55:45 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.373 09:55:45 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.373 09:55:45 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.373 09:55:45 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:59.373 09:55:45 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:59.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:59.373 09:55:45 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:59.373 09:55:45 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:37:59.373 09:55:45 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:59.373 09:55:45 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:59.373 09:55:45 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.373 09:55:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:59.373 09:55:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:59.373 09:55:45 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:37:59.373 09:55:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:38:07.522 09:55:52 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:07.522 09:55:52 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:07.523 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:07.523 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:07.523 09:55:52 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:07.523 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:07.523 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:07.523 
09:55:52 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:07.523 09:55:52 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:07.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:07.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:38:07.523 00:38:07.523 --- 10.0.0.2 ping statistics --- 00:38:07.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.523 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:07.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:07.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:38:07.523 00:38:07.523 --- 10.0.0.1 ping statistics --- 00:38:07.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:07.523 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:07.523 09:55:53 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:10.072 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:10.072 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:38:10.072 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:10.072 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:10.334 09:55:57 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:10.334 09:55:57 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:10.334 09:55:57 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:10.334 09:55:57 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:10.334 09:55:57 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:10.334 09:55:57 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:10.334 09:55:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:10.334 09:55:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:10.334 09:55:57 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:10.334 09:55:57 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:10.334 09:55:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:10.595 09:55:57 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=639984 00:38:10.595 09:55:57 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 639984 00:38:10.595 09:55:57 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:10.595 09:55:57 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 639984 ']' 00:38:10.595 09:55:57 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:10.595 09:55:57 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:10.595 09:55:57 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:10.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:10.595 09:55:57 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:10.595 09:55:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:10.595 [2024-11-19 09:55:57.137134] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:38:10.595 [2024-11-19 09:55:57.137188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:10.595 [2024-11-19 09:55:57.231799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.595 [2024-11-19 09:55:57.266473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:10.595 [2024-11-19 09:55:57.266506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:10.595 [2024-11-19 09:55:57.266518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:10.595 [2024-11-19 09:55:57.266524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:10.595 [2024-11-19 09:55:57.266530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:10.595 [2024-11-19 09:55:57.267103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:38:11.540 09:55:57 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:11.540 09:55:57 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:11.540 09:55:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:11.540 09:55:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:11.540 [2024-11-19 09:55:57.979196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.540 09:55:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:11.540 09:55:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:11.540 ************************************ 00:38:11.540 START TEST fio_dif_1_default 00:38:11.540 ************************************ 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:11.540 bdev_null0 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:11.540 [2024-11-19 09:55:58.067548] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:11.540 { 00:38:11.540 "params": { 00:38:11.540 "name": "Nvme$subsystem", 00:38:11.540 "trtype": "$TEST_TRANSPORT", 00:38:11.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:11.540 "adrfam": "ipv4", 00:38:11.540 "trsvcid": "$NVMF_PORT", 00:38:11.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:11.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:11.540 "hdgst": ${hdgst:-false}, 00:38:11.540 "ddgst": ${ddgst:-false} 00:38:11.540 }, 00:38:11.540 "method": "bdev_nvme_attach_controller" 00:38:11.540 } 00:38:11.540 EOF 00:38:11.540 )") 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:11.540 "params": { 00:38:11.540 "name": "Nvme0", 00:38:11.540 "trtype": "tcp", 00:38:11.540 "traddr": "10.0.0.2", 00:38:11.540 "adrfam": "ipv4", 00:38:11.540 "trsvcid": "4420", 00:38:11.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:11.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:11.540 "hdgst": false, 00:38:11.540 "ddgst": false 00:38:11.540 }, 00:38:11.540 "method": "bdev_nvme_attach_controller" 00:38:11.540 }' 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:11.540 09:55:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.801 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:11.801 fio-3.35 
00:38:11.801 Starting 1 thread 00:38:24.045 00:38:24.045 filename0: (groupid=0, jobs=1): err= 0: pid=640514: Tue Nov 19 09:56:09 2024 00:38:24.045 read: IOPS=191, BW=764KiB/s (782kB/s)(7664KiB/10030msec) 00:38:24.045 slat (nsec): min=5404, max=35980, avg=6127.86, stdev=1498.92 00:38:24.045 clat (usec): min=513, max=43003, avg=20922.86, stdev=20196.30 00:38:24.045 lat (usec): min=519, max=43039, avg=20928.99, stdev=20196.25 00:38:24.045 clat percentiles (usec): 00:38:24.045 | 1.00th=[ 603], 5.00th=[ 807], 10.00th=[ 824], 20.00th=[ 848], 00:38:24.045 | 30.00th=[ 857], 40.00th=[ 922], 50.00th=[ 1029], 60.00th=[41157], 00:38:24.045 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:38:24.045 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:38:24.045 | 99.99th=[43254] 00:38:24.045 bw ( KiB/s): min= 704, max= 832, per=99.99%, avg=764.80, stdev=22.98, samples=20 00:38:24.045 iops : min= 176, max= 208, avg=191.20, stdev= 5.75, samples=20 00:38:24.045 lat (usec) : 750=2.30%, 1000=46.40% 00:38:24.045 lat (msec) : 2=1.62%, 50=49.69% 00:38:24.046 cpu : usr=93.44%, sys=6.33%, ctx=14, majf=0, minf=225 00:38:24.046 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.046 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.046 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:24.046 00:38:24.046 Run status group 0 (all jobs): 00:38:24.046 READ: bw=764KiB/s (782kB/s), 764KiB/s-764KiB/s (782kB/s-782kB/s), io=7664KiB (7848kB), run=10030-10030msec 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in 
"$@" 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 00:38:24.046 real 0m11.336s 00:38:24.046 user 0m25.552s 00:38:24.046 sys 0m0.966s 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 ************************************ 00:38:24.046 END TEST fio_dif_1_default 00:38:24.046 ************************************ 00:38:24.046 09:56:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:24.046 09:56:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:24.046 09:56:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 ************************************ 00:38:24.046 START TEST fio_dif_1_multi_subsystems 00:38:24.046 ************************************ 00:38:24.046 09:56:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 bdev_null0 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 [2024-11-19 09:56:09.486463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 bdev_null1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:24.046 { 00:38:24.046 "params": { 00:38:24.046 "name": "Nvme$subsystem", 00:38:24.046 "trtype": "$TEST_TRANSPORT", 00:38:24.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.046 "adrfam": "ipv4", 00:38:24.046 "trsvcid": "$NVMF_PORT", 00:38:24.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.046 "hdgst": ${hdgst:-false}, 00:38:24.046 "ddgst": ${ddgst:-false} 00:38:24.046 }, 00:38:24.046 "method": "bdev_nvme_attach_controller" 00:38:24.046 } 00:38:24.046 EOF 00:38:24.046 )") 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:24.046 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:24.046 { 00:38:24.046 "params": { 00:38:24.046 "name": "Nvme$subsystem", 00:38:24.046 "trtype": "$TEST_TRANSPORT", 00:38:24.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.046 "adrfam": "ipv4", 00:38:24.046 "trsvcid": "$NVMF_PORT", 00:38:24.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.047 "hdgst": ${hdgst:-false}, 00:38:24.047 "ddgst": ${ddgst:-false} 00:38:24.047 }, 00:38:24.047 "method": "bdev_nvme_attach_controller" 00:38:24.047 } 00:38:24.047 EOF 00:38:24.047 )") 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:24.047 "params": { 00:38:24.047 "name": "Nvme0", 00:38:24.047 "trtype": "tcp", 00:38:24.047 "traddr": "10.0.0.2", 00:38:24.047 "adrfam": "ipv4", 00:38:24.047 "trsvcid": "4420", 00:38:24.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.047 "hdgst": false, 00:38:24.047 "ddgst": false 00:38:24.047 }, 00:38:24.047 "method": "bdev_nvme_attach_controller" 00:38:24.047 },{ 00:38:24.047 "params": { 00:38:24.047 "name": "Nvme1", 00:38:24.047 "trtype": "tcp", 00:38:24.047 "traddr": "10.0.0.2", 00:38:24.047 "adrfam": "ipv4", 00:38:24.047 "trsvcid": "4420", 00:38:24.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:24.047 "hdgst": false, 00:38:24.047 "ddgst": false 00:38:24.047 }, 00:38:24.047 "method": "bdev_nvme_attach_controller" 00:38:24.047 }' 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:24.047 09:56:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.047 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:24.047 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:24.047 fio-3.35 00:38:24.047 Starting 2 threads 00:38:36.280 00:38:36.280 filename0: (groupid=0, jobs=1): err= 0: pid=642820: Tue Nov 19 09:56:20 2024 00:38:36.280 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10023msec) 00:38:36.280 slat (nsec): min=5412, max=45131, avg=6639.96, stdev=2325.32 00:38:36.280 clat (usec): min=40835, max=43024, avg=41057.40, stdev=285.32 00:38:36.280 lat (usec): min=40840, max=43029, avg=41064.04, stdev=285.92 00:38:36.280 clat percentiles (usec): 00:38:36.280 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:36.280 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:36.280 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:38:36.280 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:38:36.280 | 99.99th=[43254] 00:38:36.280 bw ( KiB/s): min= 384, max= 416, per=49.82%, avg=388.80, stdev=11.72, samples=20 00:38:36.280 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:36.280 lat (msec) : 50=100.00% 00:38:36.280 cpu : usr=95.67%, sys=4.11%, ctx=15, majf=0, minf=208 00:38:36.280 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:36.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.280 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.280 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.280 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:36.280 filename1: (groupid=0, jobs=1): err= 0: pid=642821: Tue Nov 19 09:56:20 2024 00:38:36.280 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10025msec) 00:38:36.280 slat (nsec): min=5408, max=32430, avg=6450.16, stdev=1780.89 00:38:36.280 clat (usec): min=40725, max=42539, avg=41066.45, stdev=284.25 00:38:36.280 lat (usec): min=40731, max=42571, avg=41072.90, stdev=284.84 00:38:36.280 clat percentiles (usec): 00:38:36.280 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:36.280 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:36.281 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:38:36.281 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:38:36.281 | 99.99th=[42730] 00:38:36.281 bw ( KiB/s): min= 384, max= 416, per=49.82%, avg=388.80, stdev=11.72, samples=20 00:38:36.281 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:36.281 lat (msec) : 50=100.00% 00:38:36.281 cpu : usr=95.45%, sys=4.33%, ctx=14, majf=0, minf=43 00:38:36.281 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:36.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.281 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.281 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:36.281 00:38:36.281 Run status group 0 (all jobs): 00:38:36.281 READ: bw=779KiB/s (798kB/s), 389KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10023-10025msec 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@43 -- # local sub 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.281 00:38:36.281 real 0m11.673s 00:38:36.281 user 0m35.136s 00:38:36.281 sys 0m1.238s 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 ************************************ 00:38:36.281 END TEST fio_dif_1_multi_subsystems 00:38:36.281 ************************************ 00:38:36.281 09:56:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:36.281 09:56:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:36.281 09:56:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 ************************************ 00:38:36.281 START TEST fio_dif_rand_params 00:38:36.281 ************************************ 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:36.281 09:56:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 bdev_null0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.281 [2024-11-19 09:56:21.241141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:36.281 { 00:38:36.281 "params": { 00:38:36.281 "name": "Nvme$subsystem", 00:38:36.281 "trtype": "$TEST_TRANSPORT", 00:38:36.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:36.281 "adrfam": "ipv4", 00:38:36.281 "trsvcid": "$NVMF_PORT", 
00:38:36.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:36.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:36.281 "hdgst": ${hdgst:-false}, 00:38:36.281 "ddgst": ${ddgst:-false} 00:38:36.281 }, 00:38:36.281 "method": "bdev_nvme_attach_controller" 00:38:36.281 } 00:38:36.281 EOF 00:38:36.281 )") 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:36.281 
09:56:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:36.281 09:56:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:36.281 "params": { 00:38:36.281 "name": "Nvme0", 00:38:36.281 "trtype": "tcp", 00:38:36.281 "traddr": "10.0.0.2", 00:38:36.281 "adrfam": "ipv4", 00:38:36.281 "trsvcid": "4420", 00:38:36.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:36.282 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:36.282 "hdgst": false, 00:38:36.282 "ddgst": false 00:38:36.282 }, 00:38:36.282 "method": "bdev_nvme_attach_controller" 00:38:36.282 }' 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:36.282 09:56:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:36.282 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, 
(W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:36.282 ... 00:38:36.282 fio-3.35 00:38:36.282 Starting 3 threads 00:38:41.570 00:38:41.570 filename0: (groupid=0, jobs=1): err= 0: pid=645221: Tue Nov 19 09:56:27 2024 00:38:41.570 read: IOPS=327, BW=41.0MiB/s (43.0MB/s)(207MiB/5045msec) 00:38:41.570 slat (nsec): min=5629, max=34950, avg=8493.82, stdev=1998.97 00:38:41.570 clat (usec): min=4838, max=87508, avg=9113.50, stdev=4909.39 00:38:41.570 lat (usec): min=4846, max=87516, avg=9122.00, stdev=4909.40 00:38:41.570 clat percentiles (usec): 00:38:41.570 | 1.00th=[ 5604], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 7570], 00:38:41.571 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:38:41.571 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10421], 00:38:41.571 | 99.00th=[46924], 99.50th=[47973], 99.90th=[50594], 99.95th=[87557], 00:38:41.571 | 99.99th=[87557] 00:38:41.571 bw ( KiB/s): min=31744, max=46336, per=35.35%, avg=42291.20, stdev=4308.78, samples=10 00:38:41.571 iops : min= 248, max= 362, avg=330.40, stdev=33.66, samples=10 00:38:41.571 lat (msec) : 10=89.84%, 20=8.83%, 50=1.21%, 100=0.12% 00:38:41.571 cpu : usr=95.52%, sys=4.22%, ctx=10, majf=0, minf=101 00:38:41.571 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.571 issued rwts: total=1654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.571 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:41.571 filename0: (groupid=0, jobs=1): err= 0: pid=645222: Tue Nov 19 09:56:27 2024 00:38:41.571 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(196MiB/5046msec) 00:38:41.571 slat (nsec): min=5475, max=38680, avg=7363.48, stdev=1945.77 00:38:41.571 clat (usec): min=5132, max=49594, avg=9634.67, stdev=4738.99 00:38:41.571 lat (usec): min=5140, max=49600, 
avg=9642.03, stdev=4739.24 00:38:41.571 clat percentiles (usec): 00:38:41.571 | 1.00th=[ 5932], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 8094], 00:38:41.571 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:38:41.571 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10814], 00:38:41.571 | 99.00th=[46400], 99.50th=[47973], 99.90th=[49021], 99.95th=[49546], 00:38:41.571 | 99.99th=[49546] 00:38:41.571 bw ( KiB/s): min=32191, max=44288, per=33.44%, avg=40006.30, stdev=3488.81, samples=10 00:38:41.571 iops : min= 251, max= 346, avg=312.50, stdev=27.38, samples=10 00:38:41.571 lat (msec) : 10=79.42%, 20=19.11%, 50=1.47% 00:38:41.571 cpu : usr=96.29%, sys=3.45%, ctx=15, majf=0, minf=150 00:38:41.571 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.571 issued rwts: total=1565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.571 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:41.571 filename0: (groupid=0, jobs=1): err= 0: pid=645223: Tue Nov 19 09:56:27 2024 00:38:41.571 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(187MiB/5048msec) 00:38:41.571 slat (nsec): min=5498, max=32137, avg=7287.63, stdev=1244.80 00:38:41.571 clat (usec): min=4936, max=50182, avg=10062.96, stdev=5726.14 00:38:41.571 lat (usec): min=4942, max=50189, avg=10070.24, stdev=5726.26 00:38:41.571 clat percentiles (usec): 00:38:41.571 | 1.00th=[ 5735], 5.00th=[ 7308], 10.00th=[ 7635], 20.00th=[ 8225], 00:38:41.571 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:38:41.571 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10814], 95.00th=[11207], 00:38:41.571 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:38:41.571 | 99.99th=[50070] 00:38:41.571 bw ( KiB/s): min=25907, max=42752, per=32.02%, avg=38302.70, stdev=5191.38, 
samples=10 00:38:41.571 iops : min= 202, max= 334, avg=299.20, stdev=40.66, samples=10 00:38:41.571 lat (msec) : 10=72.72%, 20=25.15%, 50=2.00%, 100=0.13% 00:38:41.571 cpu : usr=95.44%, sys=4.30%, ctx=8, majf=0, minf=104 00:38:41.571 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.571 issued rwts: total=1499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.571 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:41.571 00:38:41.571 Run status group 0 (all jobs): 00:38:41.571 READ: bw=117MiB/s (123MB/s), 37.1MiB/s-41.0MiB/s (38.9MB/s-43.0MB/s), io=590MiB (618MB), run=5045-5048msec 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- 
# set +x 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.571 bdev_null0 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.571 [2024-11-19 09:56:27.548487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:41.571 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 bdev_null1 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 
53313233-1 --allow-any-host 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 bdev_null2 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # 
local subsystem config 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:41.572 { 00:38:41.572 "params": { 00:38:41.572 "name": "Nvme$subsystem", 00:38:41.572 "trtype": "$TEST_TRANSPORT", 00:38:41.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:41.572 "adrfam": "ipv4", 00:38:41.572 "trsvcid": "$NVMF_PORT", 00:38:41.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:41.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:41.572 "hdgst": ${hdgst:-false}, 00:38:41.572 "ddgst": ${ddgst:-false} 00:38:41.572 }, 00:38:41.572 "method": "bdev_nvme_attach_controller" 00:38:41.572 } 00:38:41.572 EOF 00:38:41.572 )") 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:41.572 { 00:38:41.572 "params": { 00:38:41.572 "name": "Nvme$subsystem", 00:38:41.572 "trtype": "$TEST_TRANSPORT", 00:38:41.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:41.572 "adrfam": "ipv4", 00:38:41.572 "trsvcid": "$NVMF_PORT", 00:38:41.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:41.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:41.572 "hdgst": ${hdgst:-false}, 00:38:41.572 "ddgst": ${ddgst:-false} 00:38:41.572 }, 00:38:41.572 "method": "bdev_nvme_attach_controller" 00:38:41.572 } 00:38:41.572 EOF 00:38:41.572 )") 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:41.572 09:56:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:41.572 { 00:38:41.572 "params": { 00:38:41.572 "name": "Nvme$subsystem", 00:38:41.572 "trtype": "$TEST_TRANSPORT", 00:38:41.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:41.572 "adrfam": "ipv4", 00:38:41.572 "trsvcid": "$NVMF_PORT", 00:38:41.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:41.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:41.572 "hdgst": ${hdgst:-false}, 00:38:41.572 "ddgst": ${ddgst:-false} 00:38:41.572 }, 00:38:41.572 "method": "bdev_nvme_attach_controller" 00:38:41.572 } 00:38:41.572 EOF 00:38:41.572 )") 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:41.572 09:56:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:41.572 "params": { 00:38:41.572 "name": "Nvme0", 00:38:41.573 "trtype": "tcp", 00:38:41.573 "traddr": "10.0.0.2", 00:38:41.573 "adrfam": "ipv4", 00:38:41.573 "trsvcid": "4420", 00:38:41.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:41.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:41.573 "hdgst": false, 00:38:41.573 "ddgst": false 00:38:41.573 }, 00:38:41.573 "method": "bdev_nvme_attach_controller" 00:38:41.573 },{ 00:38:41.573 "params": { 00:38:41.573 "name": "Nvme1", 00:38:41.573 "trtype": "tcp", 00:38:41.573 "traddr": "10.0.0.2", 00:38:41.573 "adrfam": "ipv4", 00:38:41.573 "trsvcid": "4420", 00:38:41.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:41.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:41.573 "hdgst": false, 00:38:41.573 "ddgst": false 00:38:41.573 }, 00:38:41.573 "method": "bdev_nvme_attach_controller" 00:38:41.573 },{ 00:38:41.573 "params": { 00:38:41.573 "name": "Nvme2", 00:38:41.573 "trtype": "tcp", 00:38:41.573 "traddr": "10.0.0.2", 00:38:41.573 "adrfam": "ipv4", 00:38:41.573 "trsvcid": "4420", 00:38:41.573 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:41.573 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:41.573 "hdgst": false, 00:38:41.573 "ddgst": false 00:38:41.573 }, 00:38:41.573 "method": "bdev_nvme_attach_controller" 00:38:41.573 }' 00:38:41.573 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:41.573 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:41.573 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:41.573 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:41.573 09:56:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:41.573 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:41.573 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:41.573 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:41.573 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:41.573 09:56:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:41.573 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:41.573 ... 00:38:41.573 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:41.573 ... 00:38:41.573 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:41.573 ... 
00:38:41.573 fio-3.35 00:38:41.573 Starting 24 threads 00:38:53.813 00:38:53.813 filename0: (groupid=0, jobs=1): err= 0: pid=646552: Tue Nov 19 09:56:39 2024 00:38:53.813 read: IOPS=717, BW=2871KiB/s (2940kB/s)(28.4MiB/10122msec) 00:38:53.813 slat (nsec): min=5561, max=88646, avg=12339.53, stdev=11876.41 00:38:53.813 clat (msec): min=6, max=131, avg=22.19, stdev= 6.40 00:38:53.813 lat (msec): min=6, max=131, avg=22.20, stdev= 6.40 00:38:53.813 clat percentiles (msec): 00:38:53.813 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18], 00:38:53.813 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:38:53.813 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:38:53.813 | 99.00th=[ 28], 99.50th=[ 34], 99.90th=[ 132], 99.95th=[ 132], 00:38:53.813 | 99.99th=[ 132] 00:38:53.813 bw ( KiB/s): min= 2560, max= 3904, per=4.55%, avg=2900.00, stdev=439.07, samples=20 00:38:53.813 iops : min= 640, max= 976, avg=725.00, stdev=109.77, samples=20 00:38:53.813 lat (msec) : 10=0.92%, 20=24.43%, 50=74.43%, 250=0.22% 00:38:53.813 cpu : usr=99.10%, sys=0.61%, ctx=7, majf=0, minf=41 00:38:53.813 IO depths : 1=4.2%, 2=8.8%, 4=20.0%, 8=58.7%, 16=8.4%, 32=0.0%, >=64=0.0% 00:38:53.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.813 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.813 issued rwts: total=7266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:53.813 filename0: (groupid=0, jobs=1): err= 0: pid=646553: Tue Nov 19 09:56:39 2024 00:38:53.813 read: IOPS=684, BW=2740KiB/s (2806kB/s)(26.8MiB/10008msec) 00:38:53.813 slat (usec): min=3, max=109, avg=24.07, stdev=19.33 00:38:53.813 clat (usec): min=1099, max=31318, avg=23119.66, stdev=3849.75 00:38:53.813 lat (usec): min=1107, max=31348, avg=23143.72, stdev=3852.08 00:38:53.813 clat percentiles (usec): 00:38:53.813 | 1.00th=[ 1418], 5.00th=[22938], 10.00th=[23200], 
20.00th=[23462], 00:38:53.813 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:38:53.813 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:38:53.813 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26346], 99.95th=[26346], 00:38:53.813 | 99.99th=[31327] 00:38:53.813 bw ( KiB/s): min= 2560, max= 4280, per=4.31%, avg=2744.84, stdev=375.54, samples=19 00:38:53.813 iops : min= 640, max= 1070, avg=686.21, stdev=93.89, samples=19 00:38:53.813 lat (msec) : 2=2.07%, 4=0.36%, 10=0.70%, 20=1.20%, 50=95.67% 00:38:53.813 cpu : usr=98.66%, sys=0.83%, ctx=108, majf=0, minf=40 00:38:53.813 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:53.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.813 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.813 issued rwts: total=6855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:53.813 filename0: (groupid=0, jobs=1): err= 0: pid=646555: Tue Nov 19 09:56:39 2024 00:38:53.813 read: IOPS=654, BW=2618KiB/s (2681kB/s)(25.8MiB/10073msec) 00:38:53.813 slat (nsec): min=5737, max=91966, avg=23471.10, stdev=11870.82 00:38:53.813 clat (msec): min=16, max=131, avg=24.24, stdev= 5.38 00:38:53.813 lat (msec): min=16, max=131, avg=24.26, stdev= 5.37 00:38:53.813 clat percentiles (msec): 00:38:53.813 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:38:53.813 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:38:53.813 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:38:53.813 | 99.00th=[ 26], 99.50th=[ 31], 99.90th=[ 131], 99.95th=[ 131], 00:38:53.813 | 99.99th=[ 131] 00:38:53.813 bw ( KiB/s): min= 2304, max= 2704, per=4.13%, avg=2629.50, stdev=98.29, samples=20 00:38:53.813 iops : min= 576, max= 676, avg=657.30, stdev=24.57, samples=20 00:38:53.813 lat (msec) : 20=0.09%, 50=99.64%, 100=0.03%, 250=0.24% 
00:38:53.813 cpu : usr=98.88%, sys=0.83%, ctx=9, majf=0, minf=40 00:38:53.813 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:53.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:53.814 filename0: (groupid=0, jobs=1): err= 0: pid=646556: Tue Nov 19 09:56:39 2024 00:38:53.814 read: IOPS=660, BW=2642KiB/s (2706kB/s)(26.0MiB/10085msec) 00:38:53.814 slat (nsec): min=5244, max=77117, avg=17791.69, stdev=11024.98 00:38:53.814 clat (msec): min=9, max=129, avg=24.06, stdev= 4.78 00:38:53.814 lat (msec): min=10, max=129, avg=24.08, stdev= 4.78 00:38:53.814 clat percentiles (msec): 00:38:53.814 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:38:53.814 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:38:53.814 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:38:53.814 | 99.00th=[ 34], 99.50th=[ 41], 99.90th=[ 107], 99.95th=[ 107], 00:38:53.814 | 99.99th=[ 130] 00:38:53.814 bw ( KiB/s): min= 2560, max= 2842, per=4.17%, avg=2657.80, stdev=73.46, samples=20 00:38:53.814 iops : min= 640, max= 710, avg=664.40, stdev=18.29, samples=20 00:38:53.814 lat (msec) : 10=0.02%, 20=3.90%, 50=95.84%, 250=0.24% 00:38:53.814 cpu : usr=99.01%, sys=0.66%, ctx=84, majf=0, minf=33 00:38:53.814 IO depths : 1=5.3%, 2=10.9%, 4=22.7%, 8=53.7%, 16=7.4%, 32=0.0%, >=64=0.0% 00:38:53.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:53.814 filename0: (groupid=0, jobs=1): err= 0: pid=646557: Tue Nov 19 09:56:39 
2024 00:38:53.814 read: IOPS=681, BW=2727KiB/s (2793kB/s)(26.9MiB/10117msec) 00:38:53.814 slat (nsec): min=5566, max=79732, avg=15323.18, stdev=10864.33 00:38:53.814 clat (msec): min=8, max=127, avg=23.27, stdev= 5.10 00:38:53.814 lat (msec): min=8, max=127, avg=23.29, stdev= 5.10 00:38:53.814 clat percentiles (msec): 00:38:53.814 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 22], 00:38:53.814 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:38:53.814 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 28], 00:38:53.814 | 99.00th=[ 36], 99.50th=[ 40], 99.90th=[ 82], 99.95th=[ 128], 00:38:53.814 | 99.99th=[ 128] 00:38:53.814 bw ( KiB/s): min= 2560, max= 3014, per=4.32%, avg=2753.10, stdev=139.12, samples=20 00:38:53.814 iops : min= 640, max= 753, avg=688.25, stdev=34.73, samples=20 00:38:53.814 lat (msec) : 10=0.10%, 20=17.25%, 50=82.42%, 100=0.14%, 250=0.09% 00:38:53.814 cpu : usr=98.82%, sys=0.87%, ctx=21, majf=0, minf=20 00:38:53.814 IO depths : 1=3.3%, 2=6.7%, 4=15.1%, 8=64.7%, 16=10.2%, 32=0.0%, >=64=0.0% 00:38:53.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 complete : 0=0.0%, 4=91.5%, 8=3.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 issued rwts: total=6898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:53.814 filename0: (groupid=0, jobs=1): err= 0: pid=646558: Tue Nov 19 09:56:39 2024 00:38:53.814 read: IOPS=659, BW=2637KiB/s (2700kB/s)(26.0MiB/10090msec) 00:38:53.814 slat (nsec): min=5562, max=88502, avg=21434.95, stdev=14235.95 00:38:53.814 clat (msec): min=10, max=131, avg=24.08, stdev= 5.70 00:38:53.814 lat (msec): min=10, max=131, avg=24.10, stdev= 5.70 00:38:53.814 clat percentiles (msec): 00:38:53.814 | 1.00th=[ 15], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:38:53.814 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:38:53.814 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 
26], 00:38:53.814 | 99.00th=[ 36], 99.50th=[ 36], 99.90th=[ 132], 99.95th=[ 132], 00:38:53.814 | 99.99th=[ 132] 00:38:53.814 bw ( KiB/s): min= 2432, max= 2864, per=4.16%, avg=2653.80, stdev=87.86, samples=20 00:38:53.814 iops : min= 608, max= 716, avg=663.40, stdev=21.99, samples=20 00:38:53.814 lat (msec) : 20=3.52%, 50=96.24%, 250=0.24% 00:38:53.814 cpu : usr=98.54%, sys=0.98%, ctx=132, majf=0, minf=24 00:38:53.814 IO depths : 1=5.5%, 2=11.3%, 4=23.8%, 8=52.3%, 16=7.1%, 32=0.0%, >=64=0.0% 00:38:53.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 issued rwts: total=6652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:53.814 filename0: (groupid=0, jobs=1): err= 0: pid=646559: Tue Nov 19 09:56:39 2024 00:38:53.814 read: IOPS=645, BW=2583KiB/s (2644kB/s)(25.4MiB/10077msec) 00:38:53.814 slat (nsec): min=5558, max=88537, avg=17215.69, stdev=14013.46 00:38:53.814 clat (msec): min=11, max=127, avg=24.62, stdev= 5.11 00:38:53.814 lat (msec): min=11, max=127, avg=24.64, stdev= 5.11 00:38:53.814 clat percentiles (msec): 00:38:53.814 | 1.00th=[ 18], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 24], 00:38:53.814 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:38:53.814 | 70.00th=[ 25], 80.00th=[ 26], 90.00th=[ 29], 95.00th=[ 31], 00:38:53.814 | 99.00th=[ 38], 99.50th=[ 41], 99.90th=[ 83], 99.95th=[ 128], 00:38:53.814 | 99.99th=[ 128] 00:38:53.814 bw ( KiB/s): min= 2256, max= 2720, per=4.07%, avg=2595.10, stdev=102.79, samples=20 00:38:53.814 iops : min= 564, max= 680, avg=648.70, stdev=25.69, samples=20 00:38:53.814 lat (msec) : 20=7.69%, 50=92.07%, 100=0.15%, 250=0.09% 00:38:53.814 cpu : usr=98.51%, sys=0.95%, ctx=70, majf=0, minf=36 00:38:53.814 IO depths : 1=0.4%, 2=0.9%, 4=4.7%, 8=78.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:38:53.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 complete : 0=0.0%, 4=89.5%, 8=8.1%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 issued rwts: total=6506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:53.814 filename0: (groupid=0, jobs=1): err= 0: pid=646560: Tue Nov 19 09:56:39 2024 00:38:53.814 read: IOPS=657, BW=2628KiB/s (2691kB/s)(25.9MiB/10105msec) 00:38:53.814 slat (usec): min=5, max=107, avg=20.51, stdev=15.78 00:38:53.814 clat (msec): min=11, max=130, avg=24.18, stdev= 5.31 00:38:53.814 lat (msec): min=11, max=130, avg=24.20, stdev= 5.31 00:38:53.814 clat percentiles (msec): 00:38:53.814 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:38:53.814 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:38:53.814 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:38:53.814 | 99.00th=[ 26], 99.50th=[ 27], 99.90th=[ 131], 99.95th=[ 131], 00:38:53.814 | 99.99th=[ 131] 00:38:53.814 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2649.60, stdev=60.18, samples=20 00:38:53.814 iops : min= 640, max= 672, avg=662.40, stdev=15.05, samples=20 00:38:53.814 lat (msec) : 20=0.51%, 50=99.25%, 250=0.24% 00:38:53.814 cpu : usr=98.92%, sys=0.62%, ctx=69, majf=0, minf=27 00:38:53.814 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:53.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.814 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:53.814 filename1: (groupid=0, jobs=1): err= 0: pid=646561: Tue Nov 19 09:56:39 2024 00:38:53.814 read: IOPS=655, BW=2620KiB/s (2683kB/s)(25.8MiB/10087msec) 00:38:53.814 slat (nsec): min=5608, max=83826, avg=22208.64, stdev=13489.51 00:38:53.814 clat (msec): min=12, max=131, avg=24.22, stdev= 5.34 
00:38:53.814 lat (msec): min=12, max=131, avg=24.25, stdev= 5.34
00:38:53.814 clat percentiles (msec):
00:38:53.814 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.814 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.814 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.814 | 99.00th=[ 26], 99.50th=[ 32], 99.90th=[ 132], 99.95th=[ 132],
00:38:53.814 | 99.99th=[ 132]
00:38:53.814 bw ( KiB/s): min= 2432, max= 2688, per=4.14%, avg=2636.20, stdev=76.70, samples=20
00:38:53.814 iops : min= 608, max= 672, avg=659.00, stdev=19.19, samples=20
00:38:53.814 lat (msec) : 20=0.06%, 50=99.70%, 250=0.24%
00:38:53.814 cpu : usr=99.09%, sys=0.56%, ctx=33, majf=0, minf=45
00:38:53.814 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:53.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.814 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.815 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.815 filename1: (groupid=0, jobs=1): err= 0: pid=646562: Tue Nov 19 09:56:39 2024
00:38:53.815 read: IOPS=657, BW=2628KiB/s (2691kB/s)(25.9MiB/10106msec)
00:38:53.815 slat (nsec): min=5569, max=97312, avg=28174.11, stdev=16464.08
00:38:53.815 clat (msec): min=11, max=131, avg=24.08, stdev= 5.36
00:38:53.815 lat (msec): min=11, max=131, avg=24.11, stdev= 5.36
00:38:53.815 clat percentiles (msec):
00:38:53.815 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.815 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.815 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.815 | 99.00th=[ 26], 99.50th=[ 28], 99.90th=[ 132], 99.95th=[ 132],
00:38:53.815 | 99.99th=[ 132]
00:38:53.815 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2649.60, stdev=60.18, samples=20
00:38:53.815 iops : min= 640, max= 672, avg=662.40, stdev=15.05, samples=20
00:38:53.815 lat (msec) : 20=0.72%, 50=99.04%, 250=0.24%
00:38:53.815 cpu : usr=98.74%, sys=0.82%, ctx=78, majf=0, minf=37
00:38:53.815 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:53.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.815 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.815 filename1: (groupid=0, jobs=1): err= 0: pid=646563: Tue Nov 19 09:56:39 2024
00:38:53.815 read: IOPS=662, BW=2649KiB/s (2713kB/s)(26.2MiB/10122msec)
00:38:53.815 slat (usec): min=5, max=123, avg=17.87, stdev=15.56
00:38:53.815 clat (msec): min=6, max=131, avg=24.00, stdev= 5.55
00:38:53.815 lat (msec): min=6, max=131, avg=24.02, stdev= 5.55
00:38:53.815 clat percentiles (msec):
00:38:53.815 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.815 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.815 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.815 | 99.00th=[ 26], 99.50th=[ 27], 99.90th=[ 131], 99.95th=[ 131],
00:38:53.815 | 99.99th=[ 132]
00:38:53.815 bw ( KiB/s): min= 2560, max= 3072, per=4.20%, avg=2675.20, stdev=109.09, samples=20
00:38:53.815 iops : min= 640, max= 768, avg=668.80, stdev=27.27, samples=20
00:38:53.815 lat (msec) : 10=0.72%, 20=1.31%, 50=97.73%, 250=0.24%
00:38:53.815 cpu : usr=99.12%, sys=0.49%, ctx=110, majf=0, minf=49
00:38:53.815 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:53.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.815 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.815 filename1: (groupid=0, jobs=1): err= 0: pid=646564: Tue Nov 19 09:56:39 2024
00:38:53.815 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.3MiB/10081msec)
00:38:53.815 slat (usec): min=5, max=110, avg=18.87, stdev=16.01
00:38:53.815 clat (msec): min=10, max=132, avg=23.83, stdev= 5.95
00:38:53.815 lat (msec): min=10, max=132, avg=23.84, stdev= 5.95
00:38:53.815 clat percentiles (msec):
00:38:53.815 | 1.00th=[ 15], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 24],
00:38:53.815 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.815 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 29],
00:38:53.815 | 99.00th=[ 32], 99.50th=[ 37], 99.90th=[ 132], 99.95th=[ 132],
00:38:53.815 | 99.99th=[ 133]
00:38:53.815 bw ( KiB/s): min= 2304, max= 2976, per=4.22%, avg=2687.90, stdev=131.47, samples=20
00:38:53.815 iops : min= 576, max= 744, avg=671.90, stdev=32.88, samples=20
00:38:53.815 lat (msec) : 20=9.31%, 50=90.46%, 250=0.24%
00:38:53.815 cpu : usr=98.15%, sys=1.15%, ctx=195, majf=0, minf=47
00:38:53.815 IO depths : 1=1.4%, 2=3.0%, 4=7.5%, 8=73.6%, 16=14.4%, 32=0.0%, >=64=0.0%
00:38:53.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 complete : 0=0.0%, 4=90.3%, 8=7.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 issued rwts: total=6738,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.815 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.815 filename1: (groupid=0, jobs=1): err= 0: pid=646565: Tue Nov 19 09:56:39 2024
00:38:53.815 read: IOPS=660, BW=2644KiB/s (2707kB/s)(26.1MiB/10119msec)
00:38:53.815 slat (nsec): min=5554, max=99424, avg=14331.10, stdev=14448.93
00:38:53.815 clat (msec): min=7, max=131, avg=24.04, stdev= 4.80
00:38:53.815 lat (msec): min=7, max=131, avg=24.06, stdev= 4.80
00:38:53.815 clat percentiles (msec):
00:38:53.815 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.815 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.815 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.815 | 99.00th=[ 26], 99.50th=[ 27], 99.90th=[ 132], 99.95th=[ 132],
00:38:53.815 | 99.99th=[ 132]
00:38:53.815 bw ( KiB/s): min= 2560, max= 2938, per=4.19%, avg=2668.50, stdev=84.86, samples=20
00:38:53.815 iops : min= 640, max= 734, avg=667.10, stdev=21.13, samples=20
00:38:53.815 lat (msec) : 10=0.60%, 20=1.23%, 50=97.94%, 100=0.09%, 250=0.15%
00:38:53.815 cpu : usr=99.05%, sys=0.63%, ctx=32, majf=0, minf=45
00:38:53.815 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0%
00:38:53.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.815 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.815 filename1: (groupid=0, jobs=1): err= 0: pid=646566: Tue Nov 19 09:56:39 2024
00:38:53.815 read: IOPS=705, BW=2821KiB/s (2888kB/s)(27.9MiB/10122msec)
00:38:53.815 slat (nsec): min=5409, max=64697, avg=9089.09, stdev=6427.19
00:38:53.815 clat (msec): min=9, max=131, avg=22.61, stdev= 6.38
00:38:53.815 lat (msec): min=9, max=131, avg=22.62, stdev= 6.38
00:38:53.815 clat percentiles (msec):
00:38:53.815 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 19],
00:38:53.815 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.815 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25],
00:38:53.815 | 99.00th=[ 26], 99.50th=[ 33], 99.90th=[ 132], 99.95th=[ 132],
00:38:53.815 | 99.99th=[ 132]
00:38:53.815 bw ( KiB/s): min= 2432, max= 3968, per=4.47%, avg=2848.80, stdev=433.96, samples=20
00:38:53.815 iops : min= 608, max= 992, avg=712.20, stdev=108.49, samples=20
00:38:53.815 lat (msec) : 10=0.08%, 20=21.83%, 50=77.64%, 100=0.22%, 250=0.22%
00:38:53.815 cpu : usr=98.99%, sys=0.71%, ctx=11, majf=0, minf=58
00:38:53.815 IO depths : 1=4.8%, 2=9.6%, 4=20.8%, 8=57.1%, 16=7.7%, 32=0.0%, >=64=0.0%
00:38:53.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 complete : 0=0.0%, 4=92.9%, 8=1.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 issued rwts: total=7138,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.815 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.815 filename1: (groupid=0, jobs=1): err= 0: pid=646567: Tue Nov 19 09:56:39 2024
00:38:53.815 read: IOPS=654, BW=2619KiB/s (2682kB/s)(25.8MiB/10079msec)
00:38:53.815 slat (usec): min=5, max=105, avg=17.58, stdev=16.12
00:38:53.815 clat (msec): min=9, max=132, avg=24.33, stdev= 6.21
00:38:53.815 lat (msec): min=9, max=132, avg=24.35, stdev= 6.21
00:38:53.815 clat percentiles (msec):
00:38:53.815 | 1.00th=[ 16], 5.00th=[ 19], 10.00th=[ 21], 20.00th=[ 24],
00:38:53.815 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.815 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 28], 95.00th=[ 31],
00:38:53.815 | 99.00th=[ 37], 99.50th=[ 41], 99.90th=[ 132], 99.95th=[ 132],
00:38:53.815 | 99.99th=[ 133]
00:38:53.815 bw ( KiB/s): min= 2368, max= 2768, per=4.13%, avg=2632.70, stdev=91.93, samples=20
00:38:53.815 iops : min= 592, max= 692, avg=658.10, stdev=22.96, samples=20
00:38:53.815 lat (msec) : 10=0.06%, 20=9.32%, 50=90.38%, 250=0.24%
00:38:53.815 cpu : usr=98.76%, sys=0.81%, ctx=91, majf=0, minf=35
00:38:53.815 IO depths : 1=1.0%, 2=2.3%, 4=6.8%, 8=75.5%, 16=14.4%, 32=0.0%, >=64=0.0%
00:38:53.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 complete : 0=0.0%, 4=89.9%, 8=7.4%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.815 issued rwts: total=6600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.815 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.815 filename1: (groupid=0, jobs=1): err= 0: pid=646569: Tue Nov 19 09:56:39 2024
00:38:53.815 read: IOPS=656, BW=2625KiB/s (2688kB/s)(25.9MiB/10093msec)
00:38:53.815 slat (usec): min=5, max=109, avg=26.39, stdev=16.33
00:38:53.815 clat (msec): min=15, max=130, avg=24.12, stdev= 5.28
00:38:53.815 lat (msec): min=15, max=130, avg=24.15, stdev= 5.28
00:38:53.815 clat percentiles (msec):
00:38:53.815 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.815 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.815 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.816 | 99.00th=[ 26], 99.50th=[ 27], 99.90th=[ 131], 99.95th=[ 131],
00:38:53.816 | 99.99th=[ 131]
00:38:53.816 bw ( KiB/s): min= 2432, max= 2688, per=4.15%, avg=2642.90, stdev=75.52, samples=20
00:38:53.816 iops : min= 608, max= 672, avg=660.70, stdev=18.91, samples=20
00:38:53.816 lat (msec) : 20=0.30%, 50=99.46%, 250=0.24%
00:38:53.816 cpu : usr=98.40%, sys=0.94%, ctx=231, majf=0, minf=32
00:38:53.816 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:53.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.816 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.816 filename2: (groupid=0, jobs=1): err= 0: pid=646570: Tue Nov 19 09:56:39 2024
00:38:53.816 read: IOPS=673, BW=2696KiB/s (2760kB/s)(26.5MiB/10073msec)
00:38:53.816 slat (usec): min=5, max=106, avg=20.69, stdev=16.27
00:38:53.816 clat (msec): min=8, max=131, avg=23.55, stdev= 5.90
00:38:53.816 lat (msec): min=8, max=131, avg=23.57, stdev= 5.91
00:38:53.816 clat percentiles (msec):
00:38:53.816 | 1.00th=[ 15], 5.00th=[ 18], 10.00th=[ 20], 20.00th=[ 24],
00:38:53.816 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.816 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25],
00:38:53.816 | 99.00th=[ 35], 99.50th=[ 41], 99.90th=[ 131], 99.95th=[ 131],
00:38:53.816 | 99.99th=[ 131]
00:38:53.816 bw ( KiB/s): min= 2304, max= 3072, per=4.25%, avg=2707.90, stdev=173.86, samples=20
00:38:53.816 iops : min= 576, max= 768, avg=676.90, stdev=43.48, samples=20
00:38:53.816 lat (msec) : 10=0.18%, 20=9.91%, 50=89.67%, 250=0.24%
00:38:53.816 cpu : usr=98.55%, sys=0.80%, ctx=197, majf=0, minf=28
00:38:53.816 IO depths : 1=3.3%, 2=8.4%, 4=21.3%, 8=57.4%, 16=9.6%, 32=0.0%, >=64=0.0%
00:38:53.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 issued rwts: total=6788,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.816 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.816 filename2: (groupid=0, jobs=1): err= 0: pid=646571: Tue Nov 19 09:56:39 2024
00:38:53.816 read: IOPS=654, BW=2617KiB/s (2680kB/s)(25.8MiB/10076msec)
00:38:53.816 slat (nsec): min=5578, max=82579, avg=22801.36, stdev=15010.77
00:38:53.816 clat (msec): min=22, max=131, avg=24.22, stdev= 5.42
00:38:53.816 lat (msec): min=22, max=131, avg=24.24, stdev= 5.42
00:38:53.816 clat percentiles (msec):
00:38:53.816 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.816 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.816 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.816 | 99.00th=[ 26], 99.50th=[ 27], 99.90th=[ 132], 99.95th=[ 132],
00:38:53.816 | 99.99th=[ 132]
00:38:53.816 bw ( KiB/s): min= 2304, max= 2688, per=4.13%, avg=2629.70, stdev=104.79, samples=20
00:38:53.816 iops : min= 576, max= 672, avg=657.35, stdev=26.16, samples=20
00:38:53.816 lat (msec) : 50=99.76%, 250=0.24%
00:38:53.816 cpu : usr=98.32%, sys=0.97%, ctx=143, majf=0, minf=28
00:38:53.816 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:53.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.816 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.816 filename2: (groupid=0, jobs=1): err= 0: pid=646572: Tue Nov 19 09:56:39 2024
00:38:53.816 read: IOPS=676, BW=2708KiB/s (2773kB/s)(26.8MiB/10122msec)
00:38:53.816 slat (nsec): min=5562, max=96210, avg=17038.58, stdev=14227.60
00:38:53.816 clat (msec): min=8, max=131, avg=23.49, stdev= 6.02
00:38:53.816 lat (msec): min=8, max=131, avg=23.50, stdev= 6.02
00:38:53.816 clat percentiles (msec):
00:38:53.816 | 1.00th=[ 12], 5.00th=[ 17], 10.00th=[ 21], 20.00th=[ 24],
00:38:53.816 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.816 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.816 | 99.00th=[ 34], 99.50th=[ 37], 99.90th=[ 131], 99.95th=[ 132],
00:38:53.816 | 99.99th=[ 132]
00:38:53.816 bw ( KiB/s): min= 2560, max= 3072, per=4.29%, avg=2734.40, stdev=142.91, samples=20
00:38:53.816 iops : min= 640, max= 768, avg=683.60, stdev=35.73, samples=20
00:38:53.816 lat (msec) : 10=0.64%, 20=9.18%, 50=89.94%, 250=0.23%
00:38:53.816 cpu : usr=98.95%, sys=0.71%, ctx=80, majf=0, minf=29
00:38:53.816 IO depths : 1=4.9%, 2=10.5%, 4=22.9%, 8=54.1%, 16=7.6%, 32=0.0%, >=64=0.0%
00:38:53.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 issued rwts: total=6852,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.816 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.816 filename2: (groupid=0, jobs=1): err= 0: pid=646573: Tue Nov 19 09:56:39 2024
00:38:53.816 read: IOPS=657, BW=2629KiB/s (2693kB/s)(25.9MiB/10101msec)
00:38:53.816 slat (nsec): min=5570, max=73344, avg=11488.79, stdev=8085.73
00:38:53.816 clat (msec): min=13, max=104, avg=24.23, stdev= 4.13
00:38:53.816 lat (msec): min=13, max=104, avg=24.24, stdev= 4.13
00:38:53.816 clat percentiles (msec):
00:38:53.816 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.816 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.816 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.816 | 99.00th=[ 26], 99.50th=[ 27], 99.90th=[ 105], 99.95th=[ 105],
00:38:53.816 | 99.99th=[ 105]
00:38:53.816 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2649.60, stdev=60.18, samples=20
00:38:53.816 iops : min= 640, max= 672, avg=662.40, stdev=15.05, samples=20
00:38:53.816 lat (msec) : 20=0.24%, 50=99.52%, 250=0.24%
00:38:53.816 cpu : usr=98.25%, sys=1.18%, ctx=244, majf=0, minf=35
00:38:53.816 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:53.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.816 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.816 filename2: (groupid=0, jobs=1): err= 0: pid=646574: Tue Nov 19 09:56:39 2024
00:38:53.816 read: IOPS=655, BW=2620KiB/s (2683kB/s)(25.8MiB/10088msec)
00:38:53.816 slat (nsec): min=5511, max=92756, avg=18043.18, stdev=12309.24
00:38:53.816 clat (msec): min=12, max=132, avg=24.28, stdev= 5.33
00:38:53.816 lat (msec): min=12, max=132, avg=24.29, stdev= 5.34
00:38:53.816 clat percentiles (msec):
00:38:53.816 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.816 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.816 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.816 | 99.00th=[ 26], 99.50th=[ 32], 99.90th=[ 132], 99.95th=[ 132],
00:38:53.816 | 99.99th=[ 133]
00:38:53.816 bw ( KiB/s): min= 2432, max= 2688, per=4.14%, avg=2636.20, stdev=76.70, samples=20
00:38:53.816 iops : min= 608, max= 672, avg=659.00, stdev=19.19, samples=20
00:38:53.816 lat (msec) : 20=0.03%, 50=99.73%, 250=0.24%
00:38:53.816 cpu : usr=99.03%, sys=0.67%, ctx=14, majf=0, minf=33
00:38:53.816 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:53.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.816 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.816 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.816 filename2: (groupid=0, jobs=1): err= 0: pid=646575: Tue Nov 19 09:56:39 2024
00:38:53.816 read: IOPS=653, BW=2615KiB/s (2678kB/s)(25.7MiB/10076msec)
00:38:53.816 slat (nsec): min=5687, max=88984, avg=19991.33, stdev=11115.40
00:38:53.816 clat (msec): min=10, max=131, avg=24.31, stdev= 5.61
00:38:53.816 lat (msec): min=10, max=131, avg=24.33, stdev= 5.61
00:38:53.816 clat percentiles (msec):
00:38:53.816 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.816 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.816 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.816 | 99.00th=[ 35], 99.50th=[ 40], 99.90th=[ 132], 99.95th=[ 132],
00:38:53.816 | 99.99th=[ 132]
00:38:53.816 bw ( KiB/s): min= 2304, max= 2720, per=4.13%, avg=2628.10, stdev=105.73, samples=20
00:38:53.817 iops : min= 576, max= 680, avg=656.95, stdev=26.43, samples=20
00:38:53.817 lat (msec) : 20=1.72%, 50=98.04%, 250=0.24%
00:38:53.817 cpu : usr=98.83%, sys=0.81%, ctx=114, majf=0, minf=54
00:38:53.817 IO depths : 1=3.9%, 2=9.1%, 4=21.3%, 8=56.2%, 16=9.5%, 32=0.0%, >=64=0.0%
00:38:53.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.817 complete : 0=0.0%, 4=93.6%, 8=1.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.817 issued rwts: total=6588,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.817 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.817 filename2: (groupid=0, jobs=1): err= 0: pid=646576: Tue Nov 19 09:56:39 2024
00:38:53.817 read: IOPS=659, BW=2639KiB/s (2702kB/s)(26.1MiB/10113msec)
00:38:53.817 slat (usec): min=5, max=102, avg=18.75, stdev=15.96
00:38:53.817 clat (msec): min=6, max=130, avg=24.10, stdev= 5.44
00:38:53.817 lat (msec): min=6, max=130, avg=24.12, stdev= 5.44
00:38:53.817 clat percentiles (msec):
00:38:53.817 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.817 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.817 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.817 | 99.00th=[ 26], 99.50th=[ 28], 99.90th=[ 131], 99.95th=[ 131],
00:38:53.817 | 99.99th=[ 131]
00:38:53.817 bw ( KiB/s): min= 2560, max= 2949, per=4.18%, avg=2662.65, stdev=89.90, samples=20
00:38:53.817 iops : min= 640, max= 737, avg=665.65, stdev=22.43, samples=20
00:38:53.817 lat (msec) : 10=0.45%, 20=0.90%, 50=98.41%, 250=0.24%
00:38:53.817 cpu : usr=98.69%, sys=0.83%, ctx=104, majf=0, minf=28
00:38:53.817 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0%
00:38:53.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.817 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.817 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.817 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.817 filename2: (groupid=0, jobs=1): err= 0: pid=646577: Tue Nov 19 09:56:39 2024
00:38:53.817 read: IOPS=658, BW=2635KiB/s (2698kB/s)(26.0MiB/10113msec)
00:38:53.817 slat (nsec): min=5710, max=92959, avg=23375.21, stdev=11992.97
00:38:53.817 clat (msec): min=10, max=130, avg=24.08, stdev= 5.34
00:38:53.817 lat (msec): min=10, max=130, avg=24.10, stdev= 5.34
00:38:53.817 clat percentiles (msec):
00:38:53.817 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24],
00:38:53.817 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24],
00:38:53.817 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26],
00:38:53.817 | 99.00th=[ 26], 99.50th=[ 27], 99.90th=[ 131], 99.95th=[ 131],
00:38:53.817 | 99.99th=[ 131]
00:38:53.817 bw ( KiB/s): min= 2560, max= 2869, per=4.17%, avg=2658.65, stdev=77.40, samples=20
00:38:53.817 iops : min= 640, max= 717, avg=664.65, stdev=19.32, samples=20
00:38:53.817 lat (msec) : 20=1.34%, 50=98.42%, 250=0.24%
00:38:53.817 cpu : usr=98.58%, sys=0.96%, ctx=77, majf=0, minf=27
00:38:53.817 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:38:53.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.817 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:53.817 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:53.817 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:53.817 
00:38:53.817 Run status group 0 (all jobs):
00:38:53.817 READ: bw=62.2MiB/s (65.2MB/s), 2583KiB/s-2871KiB/s (2644kB/s-2940kB/s), io=630MiB (660MB), run=10008-10122msec
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.817 bdev_null0
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.817 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.818 [2024-11-19 09:56:39.366907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.818 bdev_null1
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:53.818 {
00:38:53.818 "params": {
00:38:53.818 "name": "Nvme$subsystem",
00:38:53.818 "trtype": "$TEST_TRANSPORT",
00:38:53.818 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:53.818 "adrfam": "ipv4",
00:38:53.818 "trsvcid": "$NVMF_PORT",
00:38:53.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:53.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:53.818 "hdgst": ${hdgst:-false},
00:38:53.818 "ddgst": ${ddgst:-false}
00:38:53.818 },
00:38:53.818 "method": "bdev_nvme_attach_controller"
00:38:53.818 }
00:38:53.818 EOF
00:38:53.818 )")
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:53.818 {
00:38:53.818 "params": {
00:38:53.818 "name": "Nvme$subsystem",
00:38:53.818 "trtype": "$TEST_TRANSPORT",
00:38:53.818 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:53.818 "adrfam": "ipv4",
00:38:53.818 "trsvcid": "$NVMF_PORT",
00:38:53.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:53.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:53.818 "hdgst": ${hdgst:-false},
00:38:53.818 "ddgst": ${ddgst:-false}
00:38:53.818 },
00:38:53.818 "method": "bdev_nvme_attach_controller"
00:38:53.818 }
00:38:53.818 EOF
00:38:53.818 )")
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:53.818 "params": {
00:38:53.818 "name": "Nvme0",
00:38:53.818 "trtype": "tcp",
00:38:53.818 "traddr": "10.0.0.2",
00:38:53.818 "adrfam": "ipv4",
00:38:53.818 "trsvcid": "4420",
00:38:53.818 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:53.818 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:53.818 "hdgst": false,
00:38:53.818 "ddgst": false
00:38:53.818 },
00:38:53.818 "method": "bdev_nvme_attach_controller"
00:38:53.818 },{
00:38:53.818 "params": {
00:38:53.818 "name": "Nvme1",
00:38:53.818 "trtype": "tcp",
00:38:53.818 "traddr": "10.0.0.2",
00:38:53.818 "adrfam": "ipv4",
00:38:53.818 "trsvcid": "4420",
00:38:53.818 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:53.818 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:53.818 "hdgst": false,
00:38:53.818 "ddgst": false
00:38:53.818 },
00:38:53.818 "method": "bdev_nvme_attach_controller"
00:38:53.818 }'
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:53.818 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:53.819 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:53.819 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:53.819 09:56:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:53.819 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:38:53.819 ...
00:38:53.819 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:38:53.819 ...
00:38:53.819 fio-3.35
00:38:53.819 Starting 4 threads
00:38:59.109 
00:38:59.109 filename0: (groupid=0, jobs=1): err= 0: pid=648931: Tue Nov 19 09:56:45 2024
00:38:59.109 read: IOPS=2972, BW=23.2MiB/s (24.3MB/s)(116MiB/5002msec)
00:38:59.109 slat (nsec): min=5402, max=76550, avg=9115.22, stdev=3459.49
00:38:59.109 clat (usec): min=898, max=5181, avg=2666.99, stdev=232.80
00:38:59.109 lat (usec): min=910, max=5214, avg=2676.11, stdev=232.72
00:38:59.109 clat percentiles (usec):
00:38:59.109 | 1.00th=[ 1975], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2638],
00:38:59.109 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671],
00:38:59.109 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2933],
00:38:59.109 | 99.00th=[ 3556], 99.50th=[ 3916], 99.90th=[ 4686], 99.95th=[ 4883],
00:38:59.109 | 99.99th=[ 5080]
00:38:59.109 bw ( KiB/s): min=23616, max=24016, per=25.11%, avg=23779.56, stdev=129.93, samples=9
00:38:59.109 iops : min= 2952, max= 3002, avg=2972.44, stdev=16.24, samples=9
00:38:59.109 lat (usec) : 1000=0.02%
00:38:59.109 lat (msec) : 2=1.32%, 4=98.28%,
10=0.38% 00:38:59.109 cpu : usr=97.26%, sys=2.48%, ctx=6, majf=0, minf=57 00:38:59.109 IO depths : 1=0.1%, 2=0.3%, 4=72.2%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.110 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.110 issued rwts: total=14866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.110 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:59.110 filename0: (groupid=0, jobs=1): err= 0: pid=648932: Tue Nov 19 09:56:45 2024 00:38:59.110 read: IOPS=2965, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:38:59.110 slat (nsec): min=7877, max=74836, avg=9270.33, stdev=3496.79 00:38:59.110 clat (usec): min=1287, max=5533, avg=2673.78, stdev=200.02 00:38:59.110 lat (usec): min=1295, max=5566, avg=2683.05, stdev=200.01 00:38:59.110 clat percentiles (usec): 00:38:59.110 | 1.00th=[ 2024], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2638], 00:38:59.110 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:38:59.110 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2933], 00:38:59.110 | 99.00th=[ 3458], 99.50th=[ 3785], 99.90th=[ 4293], 99.95th=[ 4621], 00:38:59.110 | 99.99th=[ 5473] 00:38:59.110 bw ( KiB/s): min=23520, max=24000, per=25.04%, avg=23704.89, stdev=138.82, samples=9 00:38:59.110 iops : min= 2940, max= 3000, avg=2963.11, stdev=17.35, samples=9 00:38:59.110 lat (msec) : 2=0.80%, 4=99.00%, 10=0.21% 00:38:59.110 cpu : usr=96.56%, sys=3.18%, ctx=6, majf=0, minf=61 00:38:59.110 IO depths : 1=0.1%, 2=0.2%, 4=71.5%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.110 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.110 issued rwts: total=14828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.110 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:59.110 filename1: (groupid=0, jobs=1): 
err= 0: pid=648933: Tue Nov 19 09:56:45 2024 00:38:59.110 read: IOPS=2962, BW=23.1MiB/s (24.3MB/s)(116MiB/5002msec) 00:38:59.110 slat (nsec): min=5441, max=47181, avg=8475.47, stdev=3431.69 00:38:59.110 clat (usec): min=1027, max=5803, avg=2677.87, stdev=224.71 00:38:59.110 lat (usec): min=1044, max=5839, avg=2686.35, stdev=224.62 00:38:59.110 clat percentiles (usec): 00:38:59.110 | 1.00th=[ 1991], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2638], 00:38:59.110 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:38:59.110 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2933], 00:38:59.110 | 99.00th=[ 3458], 99.50th=[ 3884], 99.90th=[ 5014], 99.95th=[ 5538], 00:38:59.110 | 99.99th=[ 5604] 00:38:59.110 bw ( KiB/s): min=23552, max=24192, per=25.05%, avg=23713.78, stdev=188.54, samples=9 00:38:59.110 iops : min= 2944, max= 3024, avg=2964.22, stdev=23.57, samples=9 00:38:59.110 lat (msec) : 2=1.03%, 4=98.54%, 10=0.44% 00:38:59.110 cpu : usr=96.20%, sys=3.52%, ctx=30, majf=0, minf=31 00:38:59.110 IO depths : 1=0.1%, 2=0.2%, 4=71.0%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.110 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.110 issued rwts: total=14820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.110 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:59.110 filename1: (groupid=0, jobs=1): err= 0: pid=648934: Tue Nov 19 09:56:45 2024 00:38:59.110 read: IOPS=2936, BW=22.9MiB/s (24.1MB/s)(115MiB/5001msec) 00:38:59.110 slat (nsec): min=7889, max=48126, avg=9309.50, stdev=3657.93 00:38:59.110 clat (usec): min=1107, max=45773, avg=2699.06, stdev=1023.36 00:38:59.110 lat (usec): min=1115, max=45808, avg=2708.37, stdev=1023.52 00:38:59.110 clat percentiles (usec): 00:38:59.110 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638], 00:38:59.110 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 
60.00th=[ 2671], 00:38:59.110 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2933], 00:38:59.110 | 99.00th=[ 3458], 99.50th=[ 3818], 99.90th=[ 4686], 99.95th=[45876], 00:38:59.110 | 99.99th=[45876] 00:38:59.110 bw ( KiB/s): min=21595, max=23808, per=24.79%, avg=23469.67, stdev=706.50, samples=9 00:38:59.110 iops : min= 2699, max= 2976, avg=2933.67, stdev=88.44, samples=9 00:38:59.110 lat (msec) : 2=0.67%, 4=99.10%, 10=0.18%, 50=0.05% 00:38:59.110 cpu : usr=97.06%, sys=2.66%, ctx=6, majf=0, minf=45 00:38:59.110 IO depths : 1=0.1%, 2=0.2%, 4=73.0%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:59.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.110 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:59.110 issued rwts: total=14685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:59.110 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:59.110 00:38:59.110 Run status group 0 (all jobs): 00:38:59.110 READ: bw=92.5MiB/s (97.0MB/s), 22.9MiB/s-23.2MiB/s (24.1MB/s-24.3MB/s), io=462MiB (485MB), run=5001-5002msec 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.110 00:38:59.110 real 0m24.598s 00:38:59.110 user 5m26.613s 00:38:59.110 sys 0m4.300s 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:59.110 09:56:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.110 ************************************ 00:38:59.110 END TEST fio_dif_rand_params 00:38:59.110 ************************************ 00:38:59.110 09:56:45 
nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:59.110 09:56:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:59.110 09:56:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.110 09:56:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:59.372 ************************************ 00:38:59.372 START TEST fio_dif_digest 00:38:59.372 ************************************ 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 3 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:59.372 bdev_null0 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:59.372 [2024-11-19 09:56:45.921629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:59.372 09:56:45 
nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:59.372 09:56:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:59.373 { 00:38:59.373 "params": { 00:38:59.373 "name": "Nvme$subsystem", 00:38:59.373 "trtype": "$TEST_TRANSPORT", 00:38:59.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.373 "adrfam": "ipv4", 00:38:59.373 "trsvcid": "$NVMF_PORT", 00:38:59.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.373 "hdgst": ${hdgst:-false}, 00:38:59.373 "ddgst": ${ddgst:-false} 00:38:59.373 }, 00:38:59.373 "method": "bdev_nvme_attach_controller" 00:38:59.373 } 00:38:59.373 EOF 00:38:59.373 )") 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:59.373 "params": { 00:38:59.373 "name": "Nvme0", 00:38:59.373 "trtype": "tcp", 00:38:59.373 "traddr": "10.0.0.2", 00:38:59.373 "adrfam": "ipv4", 00:38:59.373 "trsvcid": "4420", 00:38:59.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:59.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:59.373 "hdgst": true, 00:38:59.373 "ddgst": true 00:38:59.373 }, 00:38:59.373 "method": "bdev_nvme_attach_controller" 00:38:59.373 }' 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:59.373 09:56:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:59.373 09:56:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:59.373 09:56:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:59.373 09:56:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:59.373 09:56:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.635 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:59.635 ... 
00:38:59.635 fio-3.35 00:38:59.635 Starting 3 threads 00:39:11.873 00:39:11.873 filename0: (groupid=0, jobs=1): err= 0: pid=650280: Tue Nov 19 09:56:56 2024 00:39:11.873 read: IOPS=364, BW=45.5MiB/s (47.8MB/s)(458MiB/10046msec) 00:39:11.873 slat (nsec): min=5781, max=31058, avg=8073.31, stdev=1517.95 00:39:11.873 clat (usec): min=5639, max=49940, avg=8213.66, stdev=1560.67 00:39:11.873 lat (usec): min=5645, max=49946, avg=8221.74, stdev=1560.63 00:39:11.873 clat percentiles (usec): 00:39:11.873 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6980], 00:39:11.873 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8717], 00:39:11.873 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10159], 00:39:11.873 | 99.00th=[10814], 99.50th=[11076], 99.90th=[11731], 99.95th=[47449], 00:39:11.873 | 99.99th=[50070] 00:39:11.873 bw ( KiB/s): min=44800, max=48640, per=42.58%, avg=46822.40, stdev=1147.58, samples=20 00:39:11.873 iops : min= 350, max= 380, avg=365.80, stdev= 8.97, samples=20 00:39:11.873 lat (msec) : 10=92.81%, 20=7.13%, 50=0.05% 00:39:11.873 cpu : usr=94.18%, sys=5.59%, ctx=18, majf=0, minf=187 00:39:11.873 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.873 issued rwts: total=3660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.873 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:11.873 filename0: (groupid=0, jobs=1): err= 0: pid=650281: Tue Nov 19 09:56:56 2024 00:39:11.873 read: IOPS=335, BW=41.9MiB/s (43.9MB/s)(421MiB/10044msec) 00:39:11.873 slat (nsec): min=5826, max=31696, avg=7922.97, stdev=1615.10 00:39:11.873 clat (usec): min=5634, max=49945, avg=8932.30, stdev=1570.57 00:39:11.873 lat (usec): min=5644, max=49952, avg=8940.23, stdev=1570.53 00:39:11.873 clat percentiles (usec): 00:39:11.873 | 1.00th=[ 
6587], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7701], 00:39:11.873 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9372], 00:39:11.873 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:39:11.873 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12125], 99.95th=[46400], 00:39:11.873 | 99.99th=[50070] 00:39:11.873 bw ( KiB/s): min=41216, max=45312, per=39.14%, avg=43046.40, stdev=1086.99, samples=20 00:39:11.873 iops : min= 322, max= 354, avg=336.30, stdev= 8.49, samples=20 00:39:11.873 lat (msec) : 10=77.30%, 20=22.64%, 50=0.06% 00:39:11.873 cpu : usr=94.75%, sys=5.01%, ctx=19, majf=0, minf=98 00:39:11.873 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.873 issued rwts: total=3365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.873 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:11.873 filename0: (groupid=0, jobs=1): err= 0: pid=650282: Tue Nov 19 09:56:56 2024 00:39:11.873 read: IOPS=159, BW=20.0MiB/s (21.0MB/s)(201MiB/10046msec) 00:39:11.873 slat (nsec): min=6013, max=30857, avg=7941.31, stdev=1841.48 00:39:11.873 clat (usec): min=8271, max=93157, avg=18728.42, stdev=17203.44 00:39:11.873 lat (usec): min=8277, max=93163, avg=18736.36, stdev=17203.48 00:39:11.873 clat percentiles (usec): 00:39:11.873 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:39:11.873 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:39:11.873 | 70.00th=[11469], 80.00th=[12649], 90.00th=[51119], 95.00th=[52167], 00:39:11.873 | 99.00th=[53216], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:39:11.873 | 99.99th=[92799] 00:39:11.873 bw ( KiB/s): min=15872, max=27392, per=18.67%, avg=20531.20, stdev=3080.52, samples=20 00:39:11.873 iops : min= 124, max= 214, avg=160.40, stdev=24.07, samples=20 
00:39:11.873 lat (msec) : 10=22.91%, 20=57.66%, 50=2.74%, 100=16.69% 00:39:11.873 cpu : usr=95.74%, sys=4.04%, ctx=16, majf=0, minf=108 00:39:11.873 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.873 issued rwts: total=1606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.873 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:11.873 00:39:11.873 Run status group 0 (all jobs): 00:39:11.873 READ: bw=107MiB/s (113MB/s), 20.0MiB/s-45.5MiB/s (21.0MB/s-47.8MB/s), io=1079MiB (1131MB), run=10044-10046msec 00:39:11.873 09:56:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:11.873 09:56:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:11.873 09:56:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.874 00:39:11.874 real 
0m11.089s 00:39:11.874 user 0m41.076s 00:39:11.874 sys 0m1.856s 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:11.874 09:56:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:11.874 ************************************ 00:39:11.874 END TEST fio_dif_digest 00:39:11.874 ************************************ 00:39:11.874 09:56:57 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:11.874 09:56:57 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:11.874 rmmod nvme_tcp 00:39:11.874 rmmod nvme_fabrics 00:39:11.874 rmmod nvme_keyring 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 639984 ']' 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 639984 00:39:11.874 09:56:57 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 639984 ']' 00:39:11.874 09:56:57 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 639984 00:39:11.874 09:56:57 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:39:11.874 09:56:57 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:11.874 09:56:57 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 639984 00:39:11.874 09:56:57 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:11.874 09:56:57 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:11.874 09:56:57 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 639984' 00:39:11.874 killing process with pid 639984 00:39:11.874 09:56:57 nvmf_dif -- common/autotest_common.sh@973 -- # kill 639984 00:39:11.874 09:56:57 nvmf_dif -- common/autotest_common.sh@978 -- # wait 639984 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:11.874 09:56:57 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:13.937 Waiting for block devices as requested 00:39:13.937 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:14.212 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:14.212 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:14.212 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:14.504 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:14.504 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:14.504 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:14.504 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:14.817 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:14.817 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:15.123 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:15.123 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:15.123 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:15.123 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:15.387 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:15.387 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:15.387 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:15.649 09:57:02 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:15.649 09:57:02 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:15.649 09:57:02 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:15.649 09:57:02 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:15.649 09:57:02 nvmf_dif -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:39:15.649 09:57:02 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:15.649 09:57:02 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:15.649 09:57:02 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:15.649 09:57:02 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:15.649 09:57:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:15.649 09:57:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.198 09:57:04 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:18.198 00:39:18.198 real 1m18.806s 00:39:18.198 user 8m11.044s 00:39:18.198 sys 0m21.916s 00:39:18.198 09:57:04 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:18.198 09:57:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:18.198 ************************************ 00:39:18.198 END TEST nvmf_dif 00:39:18.198 ************************************ 00:39:18.198 09:57:04 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:18.198 09:57:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:18.198 09:57:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:18.198 09:57:04 -- common/autotest_common.sh@10 -- # set +x 00:39:18.198 ************************************ 00:39:18.198 START TEST nvmf_abort_qd_sizes 00:39:18.198 ************************************ 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:18.198 * Looking for test storage... 
00:39:18.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:18.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.198 --rc genhtml_branch_coverage=1 00:39:18.198 --rc genhtml_function_coverage=1 00:39:18.198 --rc genhtml_legend=1 00:39:18.198 --rc geninfo_all_blocks=1 00:39:18.198 --rc geninfo_unexecuted_blocks=1 00:39:18.198 00:39:18.198 ' 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:18.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.198 --rc genhtml_branch_coverage=1 00:39:18.198 --rc genhtml_function_coverage=1 00:39:18.198 --rc genhtml_legend=1 00:39:18.198 --rc 
geninfo_all_blocks=1 00:39:18.198 --rc geninfo_unexecuted_blocks=1 00:39:18.198 00:39:18.198 ' 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:18.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.198 --rc genhtml_branch_coverage=1 00:39:18.198 --rc genhtml_function_coverage=1 00:39:18.198 --rc genhtml_legend=1 00:39:18.198 --rc geninfo_all_blocks=1 00:39:18.198 --rc geninfo_unexecuted_blocks=1 00:39:18.198 00:39:18.198 ' 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:18.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.198 --rc genhtml_branch_coverage=1 00:39:18.198 --rc genhtml_function_coverage=1 00:39:18.198 --rc genhtml_legend=1 00:39:18.198 --rc geninfo_all_blocks=1 00:39:18.198 --rc geninfo_unexecuted_blocks=1 00:39:18.198 00:39:18.198 ' 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:18.198 09:57:04 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:18.198 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:18.199 09:57:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:18.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:39:18.199 09:57:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:26.342 09:57:11 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:26.342 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:26.342 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:26.342 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:26.342 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:26.342 09:57:11 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:26.342 09:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:26.342 09:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:26.342 09:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:26.342 09:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:26.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:26.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:39:26.342 00:39:26.342 --- 10.0.0.2 ping statistics --- 00:39:26.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:26.342 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:39:26.342 09:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:26.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:26.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:39:26.342 00:39:26.342 --- 10.0.0.1 ping statistics --- 00:39:26.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:26.342 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:39:26.342 09:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:26.342 09:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:39:26.342 09:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:26.342 09:57:12 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:28.890 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:28.890 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:28.890 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:28.890 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:28.890 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:28.890 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:28.890 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:29.150 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:29.411 09:57:16 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:29.411 09:57:16 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:29.411 09:57:16 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:29.411 09:57:16 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:29.411 09:57:16 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:29.411 09:57:16 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:29.672 09:57:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:29.672 09:57:16 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:29.672 09:57:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:29.672 09:57:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:29.672 09:57:16 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=660372 00:39:29.672 09:57:16 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 660372 00:39:29.672 09:57:16 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:29.672 09:57:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 660372 ']' 00:39:29.673 09:57:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:29.673 09:57:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:29.673 09:57:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:29.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:29.673 09:57:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:29.673 09:57:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:29.673 [2024-11-19 09:57:16.266937] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:39:29.673 [2024-11-19 09:57:16.266998] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:29.673 [2024-11-19 09:57:16.366944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:29.933 [2024-11-19 09:57:16.421854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:29.933 [2024-11-19 09:57:16.421907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:29.933 [2024-11-19 09:57:16.421916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:29.933 [2024-11-19 09:57:16.421923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:29.933 [2024-11-19 09:57:16.421929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:29.933 [2024-11-19 09:57:16.424317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:29.933 [2024-11-19 09:57:16.424480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:29.933 [2024-11-19 09:57:16.424608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.933 [2024-11-19 09:57:16.424609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:30.504 09:57:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:30.504 09:57:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:39:30.504 09:57:17 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:30.504 09:57:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:30.504 09:57:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:30.504 09:57:17 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:30.505 09:57:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:30.505 ************************************ 00:39:30.505 START TEST spdk_target_abort 00:39:30.505 ************************************ 00:39:30.505 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:39:30.505 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:30.505 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:39:30.505 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.505 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.766 spdk_targetn1 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.766 [2024-11-19 09:57:17.479830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.766 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:31.027 [2024-11-19 09:57:17.544183] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:31.027 09:57:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:31.288 [2024-11-19 09:57:17.815825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1328 len:8 PRP1 0x200004abe000 PRP2 0x0 00:39:31.288 [2024-11-19 09:57:17.815863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00a7 p:1 m:0 dnr:0 00:39:31.288 [2024-11-19 09:57:17.847410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2352 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:39:31.288 [2024-11-19 09:57:17.847433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:31.288 [2024-11-19 09:57:17.893654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3872 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:39:31.288 [2024-11-19 
09:57:17.893676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00e5 p:0 m:0 dnr:0 00:39:31.288 [2024-11-19 09:57:17.895398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3960 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:39:31.288 [2024-11-19 09:57:17.895416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f1 p:0 m:0 dnr:0 00:39:34.591 Initializing NVMe Controllers 00:39:34.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:34.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:34.591 Initialization complete. Launching workers. 00:39:34.591 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11098, failed: 4 00:39:34.591 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2625, failed to submit 8477 00:39:34.591 success 756, unsuccessful 1869, failed 0 00:39:34.591 09:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:34.591 09:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:34.591 [2024-11-19 09:57:20.984425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:1056 len:8 PRP1 0x200004e44000 PRP2 0x0 00:39:34.591 [2024-11-19 09:57:20.984467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:39:34.591 [2024-11-19 09:57:21.000346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:1384 len:8 PRP1 0x200004e56000 PRP2 0x0 
00:39:34.591 [2024-11-19 09:57:21.000371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:39:34.591 [2024-11-19 09:57:21.263259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:7520 len:8 PRP1 0x200004e56000 PRP2 0x0 00:39:34.591 [2024-11-19 09:57:21.263288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:00b0 p:0 m:0 dnr:0 00:39:37.896 Initializing NVMe Controllers 00:39:37.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:37.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:37.896 Initialization complete. Launching workers. 00:39:37.896 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8591, failed: 3 00:39:37.896 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1209, failed to submit 7385 00:39:37.896 success 318, unsuccessful 891, failed 0 00:39:37.896 09:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:37.896 09:57:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:37.896 [2024-11-19 09:57:24.271537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:158 nsid:1 lba:7320 len:8 PRP1 0x200004b04000 PRP2 0x0 00:39:37.896 [2024-11-19 09:57:24.271568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:158 cdw0:0 sqhd:00f6 p:1 m:0 dnr:0 00:39:38.838 [2024-11-19 09:57:25.306633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:183 nsid:1 lba:128232 len:8 PRP1 
0x200004b28000 PRP2 0x0 00:39:38.838 [2024-11-19 09:57:25.306656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:183 cdw0:0 sqhd:00fc p:0 m:0 dnr:0 00:39:39.780 [2024-11-19 09:57:26.243319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:169 nsid:1 lba:236960 len:8 PRP1 0x200004b24000 PRP2 0x0 00:39:39.780 [2024-11-19 09:57:26.243341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:169 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:39:40.723 Initializing NVMe Controllers 00:39:40.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:40.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:40.723 Initialization complete. Launching workers. 00:39:40.723 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43805, failed: 3 00:39:40.723 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2622, failed to submit 41186 00:39:40.723 success 593, unsuccessful 2029, failed 0 00:39:40.723 09:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:40.723 09:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.723 09:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:40.723 09:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.723 09:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:40.723 09:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.723 09:57:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:42.642 09:57:29 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 660372 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 660372 ']' 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 660372 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 660372 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 660372' 00:39:42.642 killing process with pid 660372 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 660372 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 660372 00:39:42.642 00:39:42.642 real 0m12.096s 00:39:42.642 user 0m49.195s 00:39:42.642 sys 0m2.046s 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:42.642 ************************************ 00:39:42.642 END TEST spdk_target_abort 00:39:42.642 ************************************ 00:39:42.642 09:57:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort 
kernel_target 00:39:42.642 09:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:42.642 09:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:42.642 09:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:42.642 ************************************ 00:39:42.642 START TEST kernel_target_abort 00:39:42.642 ************************************ 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:39:42.642 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:39:42.906 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:42.906 09:57:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:46.205 Waiting for block devices as requested 00:39:46.205 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:46.205 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:46.205 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:46.205 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:46.466 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:46.466 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:46.466 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:46.727 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:46.727 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:46.987 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:46.987 0000:00:01.7 (8086 0b00): 
vfio-pci -> ioatdma 00:39:46.987 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:47.248 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:47.248 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:47.248 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:47.248 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:47.509 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:47.770 No valid GPT data, bailing 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:39:47.770 09:57:34 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:47.770 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:47.771 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:47.771 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:47.771 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:47.771 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:47.771 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:39:48.031 00:39:48.031 Discovery Log Number of Records 2, Generation counter 2 00:39:48.031 =====Discovery Log Entry 0====== 00:39:48.031 trtype: tcp 00:39:48.031 
adrfam: ipv4 00:39:48.031 subtype: current discovery subsystem 00:39:48.031 treq: not specified, sq flow control disable supported 00:39:48.031 portid: 1 00:39:48.031 trsvcid: 4420 00:39:48.031 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:48.031 traddr: 10.0.0.1 00:39:48.031 eflags: none 00:39:48.031 sectype: none 00:39:48.031 =====Discovery Log Entry 1====== 00:39:48.031 trtype: tcp 00:39:48.031 adrfam: ipv4 00:39:48.031 subtype: nvme subsystem 00:39:48.031 treq: not specified, sq flow control disable supported 00:39:48.031 portid: 1 00:39:48.031 trsvcid: 4420 00:39:48.031 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:48.031 traddr: 10.0.0.1 00:39:48.031 eflags: none 00:39:48.031 sectype: none 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:48.031 09:57:34 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:48.031 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:48.032 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:48.032 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:48.032 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:48.032 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:48.032 09:57:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:51.332 Initializing NVMe Controllers 00:39:51.332 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:51.332 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:51.332 Initialization complete. 
Launching workers. 00:39:51.332 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68012, failed: 0 00:39:51.332 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68012, failed to submit 0 00:39:51.332 success 0, unsuccessful 68012, failed 0 00:39:51.332 09:57:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:51.332 09:57:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:54.627 Initializing NVMe Controllers 00:39:54.627 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:54.627 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:54.627 Initialization complete. Launching workers. 00:39:54.627 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117293, failed: 0 00:39:54.627 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29526, failed to submit 87767 00:39:54.627 success 0, unsuccessful 29526, failed 0 00:39:54.627 09:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:54.627 09:57:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:57.170 Initializing NVMe Controllers 00:39:57.170 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:57.170 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:57.170 Initialization complete. Launching workers. 
00:39:57.170 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145662, failed: 0 00:39:57.170 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36454, failed to submit 109208 00:39:57.170 success 0, unsuccessful 36454, failed 0 00:39:57.170 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:57.170 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:57.170 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:57.430 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:57.430 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:57.430 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:57.430 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:57.430 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:57.430 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:57.430 09:57:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:00.726 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:00.726 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:00.985 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:00.985 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:02.894 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:02.894 00:40:02.894 real 0m20.218s 00:40:02.894 user 0m9.926s 00:40:02.894 sys 0m5.980s 00:40:02.894 09:57:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:02.894 09:57:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:02.894 ************************************ 00:40:02.894 END TEST kernel_target_abort 00:40:02.894 ************************************ 00:40:02.894 09:57:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:02.894 09:57:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:02.894 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:02.894 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:40:02.894 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:02.894 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:40:02.894 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:02.895 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:02.895 rmmod nvme_tcp 00:40:02.895 rmmod nvme_fabrics 00:40:03.156 rmmod nvme_keyring 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 660372 ']' 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 660372 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 660372 ']' 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 660372 00:40:03.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (660372) - No such process 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 660372 is not found' 00:40:03.156 Process with pid 660372 is not found 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:03.156 09:57:49 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:06.462 Waiting for block devices as requested 00:40:06.462 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:06.462 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:06.462 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:06.723 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:06.723 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:06.723 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:06.985 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:06.985 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:06.985 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:07.246 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:07.246 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:07.506 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:07.506 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:07.506 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:07.768 0000:00:01.3 
(8086 0b00): vfio-pci -> ioatdma 00:40:07.768 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:07.768 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:08.028 09:57:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.574 09:57:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:10.574 00:40:10.574 real 0m52.259s 00:40:10.574 user 1m4.604s 00:40:10.574 sys 0m19.096s 00:40:10.574 09:57:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:10.574 09:57:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:10.574 ************************************ 00:40:10.574 END TEST nvmf_abort_qd_sizes 00:40:10.574 ************************************ 00:40:10.574 09:57:56 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:10.574 09:57:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:10.574 09:57:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:40:10.574 09:57:56 -- common/autotest_common.sh@10 -- # set +x 00:40:10.574 ************************************ 00:40:10.574 START TEST keyring_file 00:40:10.574 ************************************ 00:40:10.574 09:57:56 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:10.574 * Looking for test storage... 00:40:10.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:10.574 09:57:56 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:10.574 09:57:56 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:40:10.574 09:57:56 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:10.574 09:57:57 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:10.574 09:57:57 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:10.574 09:57:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:10.574 09:57:57 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:10.574 09:57:57 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:10.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.574 --rc genhtml_branch_coverage=1 00:40:10.574 --rc genhtml_function_coverage=1 00:40:10.574 --rc genhtml_legend=1 00:40:10.574 --rc geninfo_all_blocks=1 00:40:10.574 --rc geninfo_unexecuted_blocks=1 00:40:10.574 00:40:10.574 ' 00:40:10.574 09:57:57 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:10.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.574 --rc genhtml_branch_coverage=1 00:40:10.574 --rc genhtml_function_coverage=1 00:40:10.574 --rc genhtml_legend=1 00:40:10.574 --rc geninfo_all_blocks=1 00:40:10.574 --rc 
geninfo_unexecuted_blocks=1 00:40:10.574 00:40:10.574 ' 00:40:10.574 09:57:57 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:10.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.574 --rc genhtml_branch_coverage=1 00:40:10.574 --rc genhtml_function_coverage=1 00:40:10.574 --rc genhtml_legend=1 00:40:10.574 --rc geninfo_all_blocks=1 00:40:10.574 --rc geninfo_unexecuted_blocks=1 00:40:10.574 00:40:10.574 ' 00:40:10.574 09:57:57 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:10.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.574 --rc genhtml_branch_coverage=1 00:40:10.574 --rc genhtml_function_coverage=1 00:40:10.574 --rc genhtml_legend=1 00:40:10.574 --rc geninfo_all_blocks=1 00:40:10.574 --rc geninfo_unexecuted_blocks=1 00:40:10.574 00:40:10.574 ' 00:40:10.574 09:57:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:10.574 09:57:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:10.574 09:57:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:10.574 09:57:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:10.574 09:57:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:10.574 09:57:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:10.574 09:57:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:10.574 09:57:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:10.574 09:57:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:10.574 09:57:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:10.575 09:57:57 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:10.575 09:57:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:10.575 09:57:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:10.575 09:57:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:10.575 09:57:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:10.575 09:57:57 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.575 09:57:57 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.575 09:57:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.575 09:57:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:10.575 09:57:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:10.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.x56Kz2pU0Y 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.x56Kz2pU0Y 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.x56Kz2pU0Y 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.x56Kz2pU0Y 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OkYZ3bnqd1 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:10.575 09:57:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OkYZ3bnqd1 00:40:10.575 09:57:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OkYZ3bnqd1 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.OkYZ3bnqd1 
00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=670663 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 670663 00:40:10.575 09:57:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:10.575 09:57:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 670663 ']' 00:40:10.575 09:57:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:10.575 09:57:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:10.575 09:57:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:10.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:10.575 09:57:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:10.575 09:57:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:10.575 [2024-11-19 09:57:57.284744] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:10.575 [2024-11-19 09:57:57.284800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670663 ] 00:40:10.837 [2024-11-19 09:57:57.374537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:10.837 [2024-11-19 09:57:57.423120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:11.408 09:57:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:11.408 09:57:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:11.408 09:57:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:11.408 09:57:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.408 09:57:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:11.408 [2024-11-19 09:57:58.100298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:11.408 null0 00:40:11.408 [2024-11-19 09:57:58.132330] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:11.408 [2024-11-19 09:57:58.132671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:11.408 09:57:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.669 09:57:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:11.669 09:57:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:11.669 09:57:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:11.669 09:57:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:11.669 09:57:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:40:11.669 09:57:58 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:11.669 09:57:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:11.670 [2024-11-19 09:57:58.164390] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:11.670 request: 00:40:11.670 { 00:40:11.670 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:11.670 "secure_channel": false, 00:40:11.670 "listen_address": { 00:40:11.670 "trtype": "tcp", 00:40:11.670 "traddr": "127.0.0.1", 00:40:11.670 "trsvcid": "4420" 00:40:11.670 }, 00:40:11.670 "method": "nvmf_subsystem_add_listener", 00:40:11.670 "req_id": 1 00:40:11.670 } 00:40:11.670 Got JSON-RPC error response 00:40:11.670 response: 00:40:11.670 { 00:40:11.670 "code": -32602, 00:40:11.670 "message": "Invalid parameters" 00:40:11.670 } 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:11.670 09:57:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=670713 00:40:11.670 09:57:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 670713 /var/tmp/bperf.sock 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 670713 ']' 00:40:11.670 09:57:58 keyring_file -- keyring/file.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:11.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:11.670 09:57:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:11.670 [2024-11-19 09:57:58.226381] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:11.670 [2024-11-19 09:57:58.226449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670713 ] 00:40:11.670 [2024-11-19 09:57:58.319472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.670 [2024-11-19 09:57:58.372975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:12.615 09:57:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:12.615 09:57:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:12.615 09:57:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.x56Kz2pU0Y 00:40:12.615 09:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.x56Kz2pU0Y 00:40:12.615 09:57:59 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 
/tmp/tmp.OkYZ3bnqd1 00:40:12.615 09:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OkYZ3bnqd1 00:40:12.615 09:57:59 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:12.615 09:57:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:12.615 09:57:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:12.615 09:57:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:12.615 09:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:12.878 09:57:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.x56Kz2pU0Y == \/\t\m\p\/\t\m\p\.\x\5\6\K\z\2\p\U\0\Y ]] 00:40:12.878 09:57:59 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:12.878 09:57:59 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:12.878 09:57:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:12.878 09:57:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:12.878 09:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.139 09:57:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.OkYZ3bnqd1 == \/\t\m\p\/\t\m\p\.\O\k\Y\Z\3\b\n\q\d\1 ]] 00:40:13.139 09:57:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.139 09:57:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:13.139 09:57:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:13.139 09:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.400 09:58:00 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:13.400 09:58:00 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:13.400 09:58:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:13.662 [2024-11-19 09:58:00.232451] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:13.662 nvme0n1 00:40:13.662 09:58:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:13.662 09:58:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:13.662 09:58:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:13.662 09:58:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.662 09:58:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:40:13.662 09:58:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:13.923 09:58:00 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:13.923 09:58:00 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:13.923 09:58:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:13.923 09:58:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:13.923 09:58:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.923 09:58:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.923 09:58:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:14.185 09:58:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:14.185 09:58:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:14.185 Running I/O for 1 seconds... 
00:40:15.128 18432.00 IOPS, 72.00 MiB/s 00:40:15.128 Latency(us) 00:40:15.128 [2024-11-19T08:58:01.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.128 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:15.128 nvme0n1 : 1.00 18491.44 72.23 0.00 0.00 6909.55 2307.41 16056.32 00:40:15.128 [2024-11-19T08:58:01.876Z] =================================================================================================================== 00:40:15.128 [2024-11-19T08:58:01.876Z] Total : 18491.44 72.23 0.00 0.00 6909.55 2307.41 16056.32 00:40:15.128 { 00:40:15.128 "results": [ 00:40:15.128 { 00:40:15.128 "job": "nvme0n1", 00:40:15.128 "core_mask": "0x2", 00:40:15.128 "workload": "randrw", 00:40:15.128 "percentage": 50, 00:40:15.128 "status": "finished", 00:40:15.128 "queue_depth": 128, 00:40:15.128 "io_size": 4096, 00:40:15.128 "runtime": 1.003816, 00:40:15.128 "iops": 18491.436677638132, 00:40:15.128 "mibps": 72.23217452202395, 00:40:15.128 "io_failed": 0, 00:40:15.128 "io_timeout": 0, 00:40:15.128 "avg_latency_us": 6909.545092123693, 00:40:15.128 "min_latency_us": 2307.4133333333334, 00:40:15.128 "max_latency_us": 16056.32 00:40:15.128 } 00:40:15.128 ], 00:40:15.128 "core_count": 1 00:40:15.128 } 00:40:15.128 09:58:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:15.128 09:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:15.389 09:58:02 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:15.389 09:58:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:15.389 09:58:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:15.389 09:58:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:15.389 09:58:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:40:15.390 09:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:15.651 09:58:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:15.651 09:58:02 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:15.651 09:58:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:15.651 09:58:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:15.651 09:58:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:15.651 09:58:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:15.651 09:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:15.651 09:58:02 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:15.651 09:58:02 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:15.651 09:58:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:15.651 09:58:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:15.651 09:58:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:15.651 09:58:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:15.651 09:58:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:15.651 09:58:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:15.651 09:58:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:15.651 09:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:15.912 [2024-11-19 09:58:02.545448] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:15.912 [2024-11-19 09:58:02.546206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x804c10 (107): Transport endpoint is not connected 00:40:15.912 [2024-11-19 09:58:02.547202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x804c10 (9): Bad file descriptor 00:40:15.912 [2024-11-19 09:58:02.548203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:15.912 [2024-11-19 09:58:02.548211] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:15.912 [2024-11-19 09:58:02.548218] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:15.912 [2024-11-19 09:58:02.548224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:40:15.912 request: 00:40:15.912 { 00:40:15.912 "name": "nvme0", 00:40:15.912 "trtype": "tcp", 00:40:15.912 "traddr": "127.0.0.1", 00:40:15.912 "adrfam": "ipv4", 00:40:15.912 "trsvcid": "4420", 00:40:15.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:15.912 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:15.912 "prchk_reftag": false, 00:40:15.912 "prchk_guard": false, 00:40:15.912 "hdgst": false, 00:40:15.912 "ddgst": false, 00:40:15.912 "psk": "key1", 00:40:15.912 "allow_unrecognized_csi": false, 00:40:15.912 "method": "bdev_nvme_attach_controller", 00:40:15.912 "req_id": 1 00:40:15.912 } 00:40:15.912 Got JSON-RPC error response 00:40:15.912 response: 00:40:15.912 { 00:40:15.912 "code": -5, 00:40:15.912 "message": "Input/output error" 00:40:15.912 } 00:40:15.912 09:58:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:15.912 09:58:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:15.912 09:58:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:15.912 09:58:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:15.912 09:58:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:15.912 09:58:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:15.912 09:58:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:15.912 09:58:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:15.912 09:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:15.912 09:58:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:16.173 09:58:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:16.173 09:58:02 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:16.173 09:58:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:16.173 09:58:02 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:40:16.173 09:58:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:16.173 09:58:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:16.173 09:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:16.435 09:58:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:16.435 09:58:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:16.435 09:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:16.435 09:58:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:16.435 09:58:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:16.696 09:58:03 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:16.696 09:58:03 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:16.696 09:58:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:16.956 09:58:03 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:40:16.956 09:58:03 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.x56Kz2pU0Y 00:40:16.956 09:58:03 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.x56Kz2pU0Y 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.x56Kz2pU0Y 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:16.956 09:58:03 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.x56Kz2pU0Y 00:40:16.956 09:58:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.x56Kz2pU0Y 00:40:16.956 [2024-11-19 09:58:03.608952] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.x56Kz2pU0Y': 0100660 00:40:16.956 [2024-11-19 09:58:03.608971] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:16.956 request: 00:40:16.956 { 00:40:16.956 "name": "key0", 00:40:16.956 "path": "/tmp/tmp.x56Kz2pU0Y", 00:40:16.956 "method": "keyring_file_add_key", 00:40:16.956 "req_id": 1 00:40:16.956 } 00:40:16.956 Got JSON-RPC error response 00:40:16.956 response: 00:40:16.956 { 00:40:16.956 "code": -1, 00:40:16.956 "message": "Operation not permitted" 00:40:16.956 } 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:16.956 09:58:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:16.956 09:58:03 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.x56Kz2pU0Y 00:40:16.956 09:58:03 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.x56Kz2pU0Y 00:40:16.956 09:58:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.x56Kz2pU0Y 00:40:17.217 09:58:03 
keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.x56Kz2pU0Y 00:40:17.217 09:58:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:17.217 09:58:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:17.217 09:58:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:17.217 09:58:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:17.217 09:58:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:17.217 09:58:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.478 09:58:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:17.478 09:58:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:17.478 09:58:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:17.478 09:58:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:17.478 09:58:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:17.478 09:58:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:17.478 09:58:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:17.478 09:58:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:17.478 09:58:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:17.478 09:58:03 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:17.478 [2024-11-19 09:58:04.118250] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.x56Kz2pU0Y': No such file or directory 00:40:17.478 [2024-11-19 09:58:04.118263] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:17.478 [2024-11-19 09:58:04.118275] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:17.478 [2024-11-19 09:58:04.118281] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:17.478 [2024-11-19 09:58:04.118287] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:17.478 [2024-11-19 09:58:04.118291] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:17.478 request: 00:40:17.478 { 00:40:17.478 "name": "nvme0", 00:40:17.478 "trtype": "tcp", 00:40:17.478 "traddr": "127.0.0.1", 00:40:17.478 "adrfam": "ipv4", 00:40:17.478 "trsvcid": "4420", 00:40:17.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:17.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:17.478 "prchk_reftag": false, 00:40:17.478 "prchk_guard": false, 00:40:17.478 "hdgst": false, 00:40:17.478 "ddgst": false, 00:40:17.478 "psk": "key0", 00:40:17.478 "allow_unrecognized_csi": false, 00:40:17.478 "method": "bdev_nvme_attach_controller", 00:40:17.478 "req_id": 1 00:40:17.478 } 00:40:17.478 Got JSON-RPC error response 00:40:17.478 response: 00:40:17.479 { 00:40:17.479 "code": -19, 00:40:17.479 "message": "No such device" 00:40:17.479 } 00:40:17.479 09:58:04 keyring_file -- common/autotest_common.sh@655 
-- # es=1 00:40:17.479 09:58:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:17.479 09:58:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:17.479 09:58:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:17.479 09:58:04 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:17.479 09:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:17.740 09:58:04 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.h9aBz4bcUj 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:17.740 09:58:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:17.740 09:58:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:17.740 09:58:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:17.740 09:58:04 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:17.740 09:58:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:17.740 09:58:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.h9aBz4bcUj 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.h9aBz4bcUj 
00:40:17.740 09:58:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.h9aBz4bcUj 00:40:17.740 09:58:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h9aBz4bcUj 00:40:17.740 09:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h9aBz4bcUj 00:40:18.001 09:58:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:18.001 09:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:18.001 nvme0n1 00:40:18.262 09:58:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:18.262 09:58:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:18.262 09:58:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:18.262 09:58:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:18.262 09:58:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:18.262 09:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:18.262 09:58:04 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:18.262 09:58:04 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:18.262 09:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:18.523 09:58:05 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:18.523 09:58:05 keyring_file -- 
keyring/file.sh@102 -- # jq -r .removed 00:40:18.523 09:58:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:18.523 09:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:18.523 09:58:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:18.784 09:58:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:18.784 09:58:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:18.784 09:58:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:18.784 09:58:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:18.784 09:58:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:18.784 09:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:18.784 09:58:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:18.784 09:58:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:18.784 09:58:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:18.784 09:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:19.049 09:58:05 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:19.049 09:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:19.049 09:58:05 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:19.310 09:58:05 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:19.310 09:58:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h9aBz4bcUj 00:40:19.310 09:58:05 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h9aBz4bcUj 00:40:19.310 09:58:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OkYZ3bnqd1 00:40:19.310 09:58:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OkYZ3bnqd1 00:40:19.572 09:58:06 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:19.572 09:58:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:19.832 nvme0n1 00:40:19.832 09:58:06 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:19.832 09:58:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:20.094 09:58:06 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:20.094 "subsystems": [ 00:40:20.094 { 00:40:20.094 "subsystem": "keyring", 00:40:20.094 "config": [ 00:40:20.094 { 00:40:20.094 "method": "keyring_file_add_key", 00:40:20.094 "params": { 00:40:20.094 "name": "key0", 00:40:20.094 "path": "/tmp/tmp.h9aBz4bcUj" 00:40:20.094 } 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "method": "keyring_file_add_key", 00:40:20.094 "params": { 00:40:20.094 "name": "key1", 00:40:20.094 "path": "/tmp/tmp.OkYZ3bnqd1" 00:40:20.094 } 00:40:20.094 } 00:40:20.094 ] 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "subsystem": "iobuf", 00:40:20.094 "config": [ 00:40:20.094 { 00:40:20.094 "method": "iobuf_set_options", 
00:40:20.094 "params": { 00:40:20.094 "small_pool_count": 8192, 00:40:20.094 "large_pool_count": 1024, 00:40:20.094 "small_bufsize": 8192, 00:40:20.094 "large_bufsize": 135168, 00:40:20.094 "enable_numa": false 00:40:20.094 } 00:40:20.094 } 00:40:20.094 ] 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "subsystem": "sock", 00:40:20.094 "config": [ 00:40:20.094 { 00:40:20.094 "method": "sock_set_default_impl", 00:40:20.094 "params": { 00:40:20.094 "impl_name": "posix" 00:40:20.094 } 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "method": "sock_impl_set_options", 00:40:20.094 "params": { 00:40:20.094 "impl_name": "ssl", 00:40:20.094 "recv_buf_size": 4096, 00:40:20.094 "send_buf_size": 4096, 00:40:20.094 "enable_recv_pipe": true, 00:40:20.094 "enable_quickack": false, 00:40:20.094 "enable_placement_id": 0, 00:40:20.094 "enable_zerocopy_send_server": true, 00:40:20.094 "enable_zerocopy_send_client": false, 00:40:20.094 "zerocopy_threshold": 0, 00:40:20.094 "tls_version": 0, 00:40:20.094 "enable_ktls": false 00:40:20.094 } 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "method": "sock_impl_set_options", 00:40:20.094 "params": { 00:40:20.094 "impl_name": "posix", 00:40:20.094 "recv_buf_size": 2097152, 00:40:20.094 "send_buf_size": 2097152, 00:40:20.094 "enable_recv_pipe": true, 00:40:20.094 "enable_quickack": false, 00:40:20.094 "enable_placement_id": 0, 00:40:20.094 "enable_zerocopy_send_server": true, 00:40:20.094 "enable_zerocopy_send_client": false, 00:40:20.094 "zerocopy_threshold": 0, 00:40:20.094 "tls_version": 0, 00:40:20.094 "enable_ktls": false 00:40:20.094 } 00:40:20.094 } 00:40:20.094 ] 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "subsystem": "vmd", 00:40:20.094 "config": [] 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "subsystem": "accel", 00:40:20.094 "config": [ 00:40:20.094 { 00:40:20.094 "method": "accel_set_options", 00:40:20.094 "params": { 00:40:20.094 "small_cache_size": 128, 00:40:20.094 "large_cache_size": 16, 00:40:20.094 "task_count": 2048, 00:40:20.094 
"sequence_count": 2048, 00:40:20.094 "buf_count": 2048 00:40:20.094 } 00:40:20.094 } 00:40:20.094 ] 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "subsystem": "bdev", 00:40:20.094 "config": [ 00:40:20.094 { 00:40:20.094 "method": "bdev_set_options", 00:40:20.094 "params": { 00:40:20.094 "bdev_io_pool_size": 65535, 00:40:20.094 "bdev_io_cache_size": 256, 00:40:20.094 "bdev_auto_examine": true, 00:40:20.094 "iobuf_small_cache_size": 128, 00:40:20.094 "iobuf_large_cache_size": 16 00:40:20.094 } 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "method": "bdev_raid_set_options", 00:40:20.094 "params": { 00:40:20.094 "process_window_size_kb": 1024, 00:40:20.094 "process_max_bandwidth_mb_sec": 0 00:40:20.094 } 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "method": "bdev_iscsi_set_options", 00:40:20.094 "params": { 00:40:20.094 "timeout_sec": 30 00:40:20.094 } 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "method": "bdev_nvme_set_options", 00:40:20.094 "params": { 00:40:20.094 "action_on_timeout": "none", 00:40:20.094 "timeout_us": 0, 00:40:20.094 "timeout_admin_us": 0, 00:40:20.094 "keep_alive_timeout_ms": 10000, 00:40:20.094 "arbitration_burst": 0, 00:40:20.094 "low_priority_weight": 0, 00:40:20.094 "medium_priority_weight": 0, 00:40:20.094 "high_priority_weight": 0, 00:40:20.094 "nvme_adminq_poll_period_us": 10000, 00:40:20.094 "nvme_ioq_poll_period_us": 0, 00:40:20.094 "io_queue_requests": 512, 00:40:20.094 "delay_cmd_submit": true, 00:40:20.094 "transport_retry_count": 4, 00:40:20.094 "bdev_retry_count": 3, 00:40:20.094 "transport_ack_timeout": 0, 00:40:20.094 "ctrlr_loss_timeout_sec": 0, 00:40:20.094 "reconnect_delay_sec": 0, 00:40:20.094 "fast_io_fail_timeout_sec": 0, 00:40:20.094 "disable_auto_failback": false, 00:40:20.094 "generate_uuids": false, 00:40:20.094 "transport_tos": 0, 00:40:20.094 "nvme_error_stat": false, 00:40:20.094 "rdma_srq_size": 0, 00:40:20.094 "io_path_stat": false, 00:40:20.094 "allow_accel_sequence": false, 00:40:20.094 "rdma_max_cq_size": 0, 
00:40:20.094 "rdma_cm_event_timeout_ms": 0, 00:40:20.094 "dhchap_digests": [ 00:40:20.094 "sha256", 00:40:20.094 "sha384", 00:40:20.094 "sha512" 00:40:20.094 ], 00:40:20.094 "dhchap_dhgroups": [ 00:40:20.094 "null", 00:40:20.094 "ffdhe2048", 00:40:20.094 "ffdhe3072", 00:40:20.094 "ffdhe4096", 00:40:20.094 "ffdhe6144", 00:40:20.094 "ffdhe8192" 00:40:20.094 ] 00:40:20.094 } 00:40:20.094 }, 00:40:20.094 { 00:40:20.094 "method": "bdev_nvme_attach_controller", 00:40:20.094 "params": { 00:40:20.094 "name": "nvme0", 00:40:20.094 "trtype": "TCP", 00:40:20.094 "adrfam": "IPv4", 00:40:20.094 "traddr": "127.0.0.1", 00:40:20.094 "trsvcid": "4420", 00:40:20.094 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:20.094 "prchk_reftag": false, 00:40:20.095 "prchk_guard": false, 00:40:20.095 "ctrlr_loss_timeout_sec": 0, 00:40:20.095 "reconnect_delay_sec": 0, 00:40:20.095 "fast_io_fail_timeout_sec": 0, 00:40:20.095 "psk": "key0", 00:40:20.095 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:20.095 "hdgst": false, 00:40:20.095 "ddgst": false, 00:40:20.095 "multipath": "multipath" 00:40:20.095 } 00:40:20.095 }, 00:40:20.095 { 00:40:20.095 "method": "bdev_nvme_set_hotplug", 00:40:20.095 "params": { 00:40:20.095 "period_us": 100000, 00:40:20.095 "enable": false 00:40:20.095 } 00:40:20.095 }, 00:40:20.095 { 00:40:20.095 "method": "bdev_wait_for_examine" 00:40:20.095 } 00:40:20.095 ] 00:40:20.095 }, 00:40:20.095 { 00:40:20.095 "subsystem": "nbd", 00:40:20.095 "config": [] 00:40:20.095 } 00:40:20.095 ] 00:40:20.095 }' 00:40:20.095 09:58:06 keyring_file -- keyring/file.sh@115 -- # killprocess 670713 00:40:20.095 09:58:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 670713 ']' 00:40:20.095 09:58:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 670713 00:40:20.095 09:58:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:20.095 09:58:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:20.095 09:58:06 keyring_file -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 670713 00:40:20.095 09:58:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:20.095 09:58:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:20.095 09:58:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 670713' 00:40:20.095 killing process with pid 670713 00:40:20.095 09:58:06 keyring_file -- common/autotest_common.sh@973 -- # kill 670713 00:40:20.095 Received shutdown signal, test time was about 1.000000 seconds 00:40:20.095 00:40:20.095 Latency(us) 00:40:20.095 [2024-11-19T08:58:06.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:20.095 [2024-11-19T08:58:06.843Z] =================================================================================================================== 00:40:20.095 [2024-11-19T08:58:06.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:20.095 09:58:06 keyring_file -- common/autotest_common.sh@978 -- # wait 670713 00:40:20.356 09:58:06 keyring_file -- keyring/file.sh@118 -- # bperfpid=672529 00:40:20.356 09:58:06 keyring_file -- keyring/file.sh@120 -- # waitforlisten 672529 /var/tmp/bperf.sock 00:40:20.356 09:58:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 672529 ']' 00:40:20.356 09:58:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:20.356 09:58:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:20.356 09:58:06 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:20.356 09:58:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:40:20.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:20.356 09:58:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:20.356 09:58:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:20.356 09:58:06 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:20.356 "subsystems": [ 00:40:20.356 { 00:40:20.356 "subsystem": "keyring", 00:40:20.356 "config": [ 00:40:20.356 { 00:40:20.356 "method": "keyring_file_add_key", 00:40:20.356 "params": { 00:40:20.356 "name": "key0", 00:40:20.356 "path": "/tmp/tmp.h9aBz4bcUj" 00:40:20.356 } 00:40:20.356 }, 00:40:20.356 { 00:40:20.356 "method": "keyring_file_add_key", 00:40:20.356 "params": { 00:40:20.356 "name": "key1", 00:40:20.356 "path": "/tmp/tmp.OkYZ3bnqd1" 00:40:20.356 } 00:40:20.356 } 00:40:20.356 ] 00:40:20.356 }, 00:40:20.356 { 00:40:20.356 "subsystem": "iobuf", 00:40:20.356 "config": [ 00:40:20.356 { 00:40:20.356 "method": "iobuf_set_options", 00:40:20.356 "params": { 00:40:20.356 "small_pool_count": 8192, 00:40:20.356 "large_pool_count": 1024, 00:40:20.356 "small_bufsize": 8192, 00:40:20.356 "large_bufsize": 135168, 00:40:20.356 "enable_numa": false 00:40:20.356 } 00:40:20.356 } 00:40:20.356 ] 00:40:20.356 }, 00:40:20.356 { 00:40:20.356 "subsystem": "sock", 00:40:20.356 "config": [ 00:40:20.356 { 00:40:20.356 "method": "sock_set_default_impl", 00:40:20.356 "params": { 00:40:20.356 "impl_name": "posix" 00:40:20.356 } 00:40:20.356 }, 00:40:20.356 { 00:40:20.356 "method": "sock_impl_set_options", 00:40:20.356 "params": { 00:40:20.356 "impl_name": "ssl", 00:40:20.356 "recv_buf_size": 4096, 00:40:20.356 "send_buf_size": 4096, 00:40:20.356 "enable_recv_pipe": true, 00:40:20.356 "enable_quickack": false, 00:40:20.356 "enable_placement_id": 0, 00:40:20.356 "enable_zerocopy_send_server": true, 00:40:20.356 "enable_zerocopy_send_client": false, 00:40:20.356 "zerocopy_threshold": 0, 00:40:20.356 "tls_version": 0, 00:40:20.356 "enable_ktls": 
false 00:40:20.356 } 00:40:20.356 }, 00:40:20.356 { 00:40:20.356 "method": "sock_impl_set_options", 00:40:20.356 "params": { 00:40:20.356 "impl_name": "posix", 00:40:20.356 "recv_buf_size": 2097152, 00:40:20.356 "send_buf_size": 2097152, 00:40:20.356 "enable_recv_pipe": true, 00:40:20.356 "enable_quickack": false, 00:40:20.356 "enable_placement_id": 0, 00:40:20.356 "enable_zerocopy_send_server": true, 00:40:20.356 "enable_zerocopy_send_client": false, 00:40:20.356 "zerocopy_threshold": 0, 00:40:20.356 "tls_version": 0, 00:40:20.356 "enable_ktls": false 00:40:20.356 } 00:40:20.356 } 00:40:20.356 ] 00:40:20.356 }, 00:40:20.356 { 00:40:20.356 "subsystem": "vmd", 00:40:20.356 "config": [] 00:40:20.356 }, 00:40:20.356 { 00:40:20.356 "subsystem": "accel", 00:40:20.356 "config": [ 00:40:20.356 { 00:40:20.356 "method": "accel_set_options", 00:40:20.356 "params": { 00:40:20.356 "small_cache_size": 128, 00:40:20.356 "large_cache_size": 16, 00:40:20.356 "task_count": 2048, 00:40:20.356 "sequence_count": 2048, 00:40:20.356 "buf_count": 2048 00:40:20.356 } 00:40:20.356 } 00:40:20.356 ] 00:40:20.356 }, 00:40:20.356 { 00:40:20.356 "subsystem": "bdev", 00:40:20.356 "config": [ 00:40:20.356 { 00:40:20.356 "method": "bdev_set_options", 00:40:20.357 "params": { 00:40:20.357 "bdev_io_pool_size": 65535, 00:40:20.357 "bdev_io_cache_size": 256, 00:40:20.357 "bdev_auto_examine": true, 00:40:20.357 "iobuf_small_cache_size": 128, 00:40:20.357 "iobuf_large_cache_size": 16 00:40:20.357 } 00:40:20.357 }, 00:40:20.357 { 00:40:20.357 "method": "bdev_raid_set_options", 00:40:20.357 "params": { 00:40:20.357 "process_window_size_kb": 1024, 00:40:20.357 "process_max_bandwidth_mb_sec": 0 00:40:20.357 } 00:40:20.357 }, 00:40:20.357 { 00:40:20.357 "method": "bdev_iscsi_set_options", 00:40:20.357 "params": { 00:40:20.357 "timeout_sec": 30 00:40:20.357 } 00:40:20.357 }, 00:40:20.357 { 00:40:20.357 "method": "bdev_nvme_set_options", 00:40:20.357 "params": { 00:40:20.357 "action_on_timeout": "none", 
00:40:20.357 "timeout_us": 0, 00:40:20.357 "timeout_admin_us": 0, 00:40:20.357 "keep_alive_timeout_ms": 10000, 00:40:20.357 "arbitration_burst": 0, 00:40:20.357 "low_priority_weight": 0, 00:40:20.357 "medium_priority_weight": 0, 00:40:20.357 "high_priority_weight": 0, 00:40:20.357 "nvme_adminq_poll_period_us": 10000, 00:40:20.357 "nvme_ioq_poll_period_us": 0, 00:40:20.357 "io_queue_requests": 512, 00:40:20.357 "delay_cmd_submit": true, 00:40:20.357 "transport_retry_count": 4, 00:40:20.357 "bdev_retry_count": 3, 00:40:20.357 "transport_ack_timeout": 0, 00:40:20.357 "ctrlr_loss_timeout_sec": 0, 00:40:20.357 "reconnect_delay_sec": 0, 00:40:20.357 "fast_io_fail_timeout_sec": 0, 00:40:20.357 "disable_auto_failback": false, 00:40:20.357 "generate_uuids": false, 00:40:20.357 "transport_tos": 0, 00:40:20.357 "nvme_error_stat": false, 00:40:20.357 "rdma_srq_size": 0, 00:40:20.357 "io_path_stat": false, 00:40:20.357 "allow_accel_sequence": false, 00:40:20.357 "rdma_max_cq_size": 0, 00:40:20.357 "rdma_cm_event_timeout_ms": 0, 00:40:20.357 "dhchap_digests": [ 00:40:20.357 "sha256", 00:40:20.357 "sha384", 00:40:20.357 "sha512" 00:40:20.357 ], 00:40:20.357 "dhchap_dhgroups": [ 00:40:20.357 "null", 00:40:20.357 "ffdhe2048", 00:40:20.357 "ffdhe3072", 00:40:20.357 "ffdhe4096", 00:40:20.357 "ffdhe6144", 00:40:20.357 "ffdhe8192" 00:40:20.357 ] 00:40:20.357 } 00:40:20.357 }, 00:40:20.357 { 00:40:20.357 "method": "bdev_nvme_attach_controller", 00:40:20.357 "params": { 00:40:20.357 "name": "nvme0", 00:40:20.357 "trtype": "TCP", 00:40:20.357 "adrfam": "IPv4", 00:40:20.357 "traddr": "127.0.0.1", 00:40:20.357 "trsvcid": "4420", 00:40:20.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:20.357 "prchk_reftag": false, 00:40:20.357 "prchk_guard": false, 00:40:20.357 "ctrlr_loss_timeout_sec": 0, 00:40:20.357 "reconnect_delay_sec": 0, 00:40:20.357 "fast_io_fail_timeout_sec": 0, 00:40:20.357 "psk": "key0", 00:40:20.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:20.357 "hdgst": false, 
00:40:20.357 "ddgst": false, 00:40:20.357 "multipath": "multipath" 00:40:20.357 } 00:40:20.357 }, 00:40:20.357 { 00:40:20.357 "method": "bdev_nvme_set_hotplug", 00:40:20.357 "params": { 00:40:20.357 "period_us": 100000, 00:40:20.357 "enable": false 00:40:20.357 } 00:40:20.357 }, 00:40:20.357 { 00:40:20.357 "method": "bdev_wait_for_examine" 00:40:20.357 } 00:40:20.357 ] 00:40:20.357 }, 00:40:20.357 { 00:40:20.357 "subsystem": "nbd", 00:40:20.357 "config": [] 00:40:20.357 } 00:40:20.357 ] 00:40:20.357 }' 00:40:20.357 [2024-11-19 09:58:06.910442] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:20.357 [2024-11-19 09:58:06.910495] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672529 ] 00:40:20.357 [2024-11-19 09:58:06.997605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.357 [2024-11-19 09:58:07.026312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.618 [2024-11-19 09:58:07.169146] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:21.189 09:58:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:21.189 09:58:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:21.189 09:58:07 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:21.189 09:58:07 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:21.189 09:58:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.189 09:58:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:21.189 09:58:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:21.189 09:58:07 keyring_file -- keyring/common.sh@12 -- # 
get_key key0 00:40:21.189 09:58:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:21.190 09:58:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.190 09:58:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.190 09:58:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:21.450 09:58:08 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:21.450 09:58:08 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:21.450 09:58:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:21.450 09:58:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:21.450 09:58:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.450 09:58:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.450 09:58:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:21.712 09:58:08 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:21.712 09:58:08 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:21.712 09:58:08 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:21.712 09:58:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:21.973 09:58:08 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:21.973 09:58:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:21.973 09:58:08 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.h9aBz4bcUj /tmp/tmp.OkYZ3bnqd1 00:40:21.973 09:58:08 keyring_file -- keyring/file.sh@20 -- # killprocess 672529 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 672529 ']' 00:40:21.973 
09:58:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 672529 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 672529 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 672529' 00:40:21.973 killing process with pid 672529 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@973 -- # kill 672529 00:40:21.973 Received shutdown signal, test time was about 1.000000 seconds 00:40:21.973 00:40:21.973 Latency(us) 00:40:21.973 [2024-11-19T08:58:08.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:21.973 [2024-11-19T08:58:08.721Z] =================================================================================================================== 00:40:21.973 [2024-11-19T08:58:08.721Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@978 -- # wait 672529 00:40:21.973 09:58:08 keyring_file -- keyring/file.sh@21 -- # killprocess 670663 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 670663 ']' 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 670663 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 670663 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 670663' 00:40:21.973 killing process with pid 670663 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@973 -- # kill 670663 00:40:21.973 09:58:08 keyring_file -- common/autotest_common.sh@978 -- # wait 670663 00:40:22.234 00:40:22.234 real 0m11.994s 00:40:22.234 user 0m29.039s 00:40:22.234 sys 0m2.687s 00:40:22.234 09:58:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:22.234 09:58:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:22.234 ************************************ 00:40:22.234 END TEST keyring_file 00:40:22.234 ************************************ 00:40:22.234 09:58:08 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:40:22.234 09:58:08 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:22.234 09:58:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:22.234 09:58:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:22.234 09:58:08 -- common/autotest_common.sh@10 -- # set +x 00:40:22.234 ************************************ 00:40:22.234 START TEST keyring_linux 00:40:22.234 ************************************ 00:40:22.234 09:58:08 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:22.497 Joined session keyring: 345512665 00:40:22.497 * Looking for test storage... 
00:40:22.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:22.497 09:58:09 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:22.497 09:58:09 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:40:22.497 09:58:09 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:22.497 09:58:09 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:22.497 09:58:09 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:22.497 09:58:09 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:22.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.497 --rc genhtml_branch_coverage=1 00:40:22.497 --rc genhtml_function_coverage=1 00:40:22.497 --rc genhtml_legend=1 00:40:22.497 --rc geninfo_all_blocks=1 00:40:22.497 --rc geninfo_unexecuted_blocks=1 00:40:22.497 00:40:22.497 ' 00:40:22.497 09:58:09 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:22.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.497 --rc genhtml_branch_coverage=1 00:40:22.497 --rc genhtml_function_coverage=1 00:40:22.497 --rc genhtml_legend=1 00:40:22.497 --rc geninfo_all_blocks=1 00:40:22.497 --rc geninfo_unexecuted_blocks=1 00:40:22.497 00:40:22.497 ' 
00:40:22.497 09:58:09 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:22.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.497 --rc genhtml_branch_coverage=1 00:40:22.497 --rc genhtml_function_coverage=1 00:40:22.497 --rc genhtml_legend=1 00:40:22.497 --rc geninfo_all_blocks=1 00:40:22.497 --rc geninfo_unexecuted_blocks=1 00:40:22.497 00:40:22.497 ' 00:40:22.497 09:58:09 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:22.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.497 --rc genhtml_branch_coverage=1 00:40:22.497 --rc genhtml_function_coverage=1 00:40:22.497 --rc genhtml_legend=1 00:40:22.497 --rc geninfo_all_blocks=1 00:40:22.497 --rc geninfo_unexecuted_blocks=1 00:40:22.497 00:40:22.497 ' 00:40:22.497 09:58:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:22.497 09:58:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:22.497 09:58:09 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:22.497 09:58:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.497 09:58:09 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.497 09:58:09 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.497 09:58:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:22.497 09:58:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:22.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:22.497 09:58:09 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:22.497 09:58:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:22.497 09:58:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:22.497 09:58:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:22.497 09:58:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:22.497 09:58:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:22.497 09:58:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:22.497 09:58:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:22.497 09:58:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:22.497 09:58:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:22.497 09:58:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:22.497 09:58:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:22.498 09:58:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:22.498 09:58:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:22.498 09:58:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:22.498 09:58:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:22.498 09:58:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:22.498 09:58:09 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:40:22.498 09:58:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:22.498 09:58:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:22.759 /tmp/:spdk-test:key0 00:40:22.759 09:58:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:22.759 09:58:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:22.759 09:58:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:22.759 09:58:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:22.759 09:58:09 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:22.759 09:58:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:22.759 09:58:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:22.759 09:58:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:22.759 /tmp/:spdk-test:key1 00:40:22.759 09:58:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=672978 00:40:22.759 09:58:09 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 672978 00:40:22.759 09:58:09 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:22.759 09:58:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 672978 ']' 00:40:22.759 09:58:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.759 09:58:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.759 09:58:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.759 09:58:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.759 09:58:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:22.759 [2024-11-19 09:58:09.372545] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:22.759 [2024-11-19 09:58:09.372625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672978 ] 00:40:22.759 [2024-11-19 09:58:09.461598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.759 [2024-11-19 09:58:09.502130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:23.701 09:58:10 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:23.701 [2024-11-19 09:58:10.176392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:23.701 null0 00:40:23.701 [2024-11-19 09:58:10.208448] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:23.701 [2024-11-19 09:58:10.208789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.701 09:58:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:23.701 260627782 00:40:23.701 09:58:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:23.701 341176663 00:40:23.701 09:58:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=673298 00:40:23.701 09:58:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 673298 /var/tmp/bperf.sock 00:40:23.701 09:58:10 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 673298 ']' 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:23.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:23.701 09:58:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:23.701 [2024-11-19 09:58:10.287917] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:23.701 [2024-11-19 09:58:10.287966] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673298 ] 00:40:23.701 [2024-11-19 09:58:10.370633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.701 [2024-11-19 09:58:10.400305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:24.644 09:58:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:24.644 09:58:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:24.644 09:58:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:24.644 09:58:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:24.644 09:58:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:24.644 09:58:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:24.903 09:58:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:24.903 09:58:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:25.162 [2024-11-19 09:58:11.652624] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:25.162 nvme0n1 00:40:25.162 09:58:11 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:40:25.162 09:58:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:25.162 09:58:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:25.162 09:58:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:25.162 09:58:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:25.162 09:58:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.423 09:58:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:25.423 09:58:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:25.423 09:58:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:25.423 09:58:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:25.423 09:58:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.423 09:58:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.423 09:58:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:25.423 09:58:12 keyring_linux -- keyring/linux.sh@25 -- # sn=260627782 00:40:25.423 09:58:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:25.423 09:58:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:25.423 09:58:12 keyring_linux -- keyring/linux.sh@26 -- # [[ 260627782 == \2\6\0\6\2\7\7\8\2 ]] 00:40:25.423 09:58:12 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 260627782 00:40:25.423 09:58:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:25.423 09:58:12 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:25.683 Running I/O for 1 seconds... 00:40:26.627 24383.00 IOPS, 95.25 MiB/s 00:40:26.627 Latency(us) 00:40:26.627 [2024-11-19T08:58:13.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:26.627 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:26.627 nvme0n1 : 1.01 24383.77 95.25 0.00 0.00 5233.64 4287.15 12014.93 00:40:26.627 [2024-11-19T08:58:13.375Z] =================================================================================================================== 00:40:26.627 [2024-11-19T08:58:13.375Z] Total : 24383.77 95.25 0.00 0.00 5233.64 4287.15 12014.93 00:40:26.627 { 00:40:26.627 "results": [ 00:40:26.627 { 00:40:26.627 "job": "nvme0n1", 00:40:26.627 "core_mask": "0x2", 00:40:26.627 "workload": "randread", 00:40:26.627 "status": "finished", 00:40:26.627 "queue_depth": 128, 00:40:26.627 "io_size": 4096, 00:40:26.627 "runtime": 1.005218, 00:40:26.627 "iops": 24383.765511560676, 00:40:26.627 "mibps": 95.24908402953389, 00:40:26.627 "io_failed": 0, 00:40:26.627 "io_timeout": 0, 00:40:26.627 "avg_latency_us": 5233.6405630125255, 00:40:26.627 "min_latency_us": 4287.1466666666665, 00:40:26.627 "max_latency_us": 12014.933333333332 00:40:26.627 } 00:40:26.627 ], 00:40:26.627 "core_count": 1 00:40:26.627 } 00:40:26.627 09:58:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:26.627 09:58:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:26.888 09:58:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:26.888 09:58:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:26.888 09:58:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:26.888 09:58:13 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:26.888 09:58:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:26.888 09:58:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.888 09:58:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:26.888 09:58:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:26.888 09:58:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:26.888 09:58:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:26.888 09:58:13 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:40:26.889 09:58:13 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:26.889 09:58:13 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:26.889 09:58:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:26.889 09:58:13 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:26.889 09:58:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:26.889 09:58:13 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:26.889 09:58:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:27.149 [2024-11-19 09:58:13.717821] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:27.149 [2024-11-19 09:58:13.718165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d6480 (107): Transport endpoint is not connected 00:40:27.149 [2024-11-19 09:58:13.719155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d6480 (9): Bad file descriptor 00:40:27.149 [2024-11-19 09:58:13.720157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:27.149 [2024-11-19 09:58:13.720168] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:27.149 [2024-11-19 09:58:13.720173] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:27.149 [2024-11-19 09:58:13.720180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:40:27.149 request: 00:40:27.149 { 00:40:27.149 "name": "nvme0", 00:40:27.149 "trtype": "tcp", 00:40:27.149 "traddr": "127.0.0.1", 00:40:27.149 "adrfam": "ipv4", 00:40:27.149 "trsvcid": "4420", 00:40:27.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:27.149 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:27.149 "prchk_reftag": false, 00:40:27.149 "prchk_guard": false, 00:40:27.149 "hdgst": false, 00:40:27.149 "ddgst": false, 00:40:27.149 "psk": ":spdk-test:key1", 00:40:27.149 "allow_unrecognized_csi": false, 00:40:27.149 "method": "bdev_nvme_attach_controller", 00:40:27.149 "req_id": 1 00:40:27.149 } 00:40:27.149 Got JSON-RPC error response 00:40:27.149 response: 00:40:27.149 { 00:40:27.149 "code": -5, 00:40:27.149 "message": "Input/output error" 00:40:27.149 } 00:40:27.149 09:58:13 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:40:27.149 09:58:13 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:27.149 09:58:13 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:27.149 09:58:13 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:27.149 09:58:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:27.149 09:58:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:27.149 09:58:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@33 -- # sn=260627782 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 260627782 00:40:27.150 1 links removed 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:27.150 
09:58:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@33 -- # sn=341176663 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 341176663 00:40:27.150 1 links removed 00:40:27.150 09:58:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 673298 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 673298 ']' 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 673298 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 673298 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 673298' 00:40:27.150 killing process with pid 673298 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 673298 00:40:27.150 Received shutdown signal, test time was about 1.000000 seconds 00:40:27.150 00:40:27.150 Latency(us) 00:40:27.150 [2024-11-19T08:58:13.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:27.150 [2024-11-19T08:58:13.898Z] =================================================================================================================== 00:40:27.150 [2024-11-19T08:58:13.898Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:27.150 09:58:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 673298 
00:40:27.410 09:58:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 672978 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 672978 ']' 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 672978 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 672978 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 672978' 00:40:27.410 killing process with pid 672978 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 672978 00:40:27.410 09:58:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 672978 00:40:27.670 00:40:27.670 real 0m5.215s 00:40:27.670 user 0m9.650s 00:40:27.670 sys 0m1.460s 00:40:27.670 09:58:14 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.670 09:58:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:27.670 ************************************ 00:40:27.670 END TEST keyring_linux 00:40:27.670 ************************************ 00:40:27.670 09:58:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:27.670 09:58:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:27.671 09:58:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:27.671 09:58:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:27.671 09:58:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:40:27.671 09:58:14 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:40:27.671 09:58:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:40:27.671 09:58:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:27.671 09:58:14 -- common/autotest_common.sh@10 -- # set +x 00:40:27.671 09:58:14 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:40:27.671 09:58:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:40:27.671 09:58:14 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:40:27.671 09:58:14 -- common/autotest_common.sh@10 -- # set +x 00:40:35.811 INFO: APP EXITING 00:40:35.811 INFO: killing all VMs 00:40:35.811 INFO: killing vhost app 00:40:35.811 INFO: EXIT DONE 00:40:38.358 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:40:38.358 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:40:38.358 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:40:38.358 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:40:38.620 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:40:38.620 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:40:38.620 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:40:38.620 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:40:38.620 0000:65:00.0 (144d a80a): Already using the nvme driver 00:40:38.620 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:40:38.620 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:40:38.620 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:40:38.620 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:40:38.620 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:40:38.620 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:40:38.880 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:40:38.880 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:40:43.090 Cleaning 00:40:43.090 Removing: /var/run/dpdk/spdk0/config 00:40:43.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:43.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:43.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:43.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:43.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:43.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:43.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:43.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:43.090 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:43.090 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:43.090 Removing: /var/run/dpdk/spdk1/config 00:40:43.090 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:43.090 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:43.090 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:43.090 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:43.090 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:43.090 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:43.090 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:43.090 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:43.090 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:43.090 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:43.090 Removing: /var/run/dpdk/spdk2/config 00:40:43.090 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:43.090 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:43.090 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:43.090 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:43.090 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:43.090 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:43.090 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:43.090 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:43.090 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:43.090 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:43.090 Removing: /var/run/dpdk/spdk3/config 00:40:43.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:43.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:43.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:43.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:43.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:43.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:43.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:43.090 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:43.090 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:43.090 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:43.090 Removing: /var/run/dpdk/spdk4/config 00:40:43.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:43.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:43.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:43.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:43.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:43.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:43.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:43.090 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:43.090 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:43.090 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:40:43.090 Removing: /dev/shm/bdev_svc_trace.1 00:40:43.090 Removing: /dev/shm/nvmf_trace.0 00:40:43.090 Removing: /dev/shm/spdk_tgt_trace.pid94375 00:40:43.090 Removing: /var/run/dpdk/spdk0 00:40:43.090 Removing: /var/run/dpdk/spdk1 00:40:43.090 Removing: /var/run/dpdk/spdk2 00:40:43.090 Removing: /var/run/dpdk/spdk3 00:40:43.090 Removing: /var/run/dpdk/spdk4 00:40:43.090 Removing: /var/run/dpdk/spdk_pid100407 00:40:43.090 Removing: /var/run/dpdk/spdk_pid100980 00:40:43.090 Removing: /var/run/dpdk/spdk_pid101322 00:40:43.090 Removing: /var/run/dpdk/spdk_pid101686 00:40:43.090 Removing: /var/run/dpdk/spdk_pid101999 00:40:43.090 Removing: /var/run/dpdk/spdk_pid102280 00:40:43.090 Removing: /var/run/dpdk/spdk_pid102630 00:40:43.090 Removing: /var/run/dpdk/spdk_pid103019 00:40:43.090 Removing: /var/run/dpdk/spdk_pid104085 00:40:43.090 Removing: /var/run/dpdk/spdk_pid107672 00:40:43.090 Removing: /var/run/dpdk/spdk_pid108035 00:40:43.090 Removing: /var/run/dpdk/spdk_pid108377 00:40:43.090 Removing: /var/run/dpdk/spdk_pid108410 00:40:43.090 Removing: /var/run/dpdk/spdk_pid108781 00:40:43.090 Removing: /var/run/dpdk/spdk_pid109013 00:40:43.090 Removing: /var/run/dpdk/spdk_pid109492 00:40:43.090 Removing: /var/run/dpdk/spdk_pid109572 00:40:43.090 Removing: /var/run/dpdk/spdk_pid109870 00:40:43.090 Removing: /var/run/dpdk/spdk_pid110200 00:40:43.090 Removing: /var/run/dpdk/spdk_pid110274 00:40:43.090 Removing: /var/run/dpdk/spdk_pid110581 00:40:43.090 Removing: /var/run/dpdk/spdk_pid111026 00:40:43.090 Removing: /var/run/dpdk/spdk_pid111375 00:40:43.090 Removing: /var/run/dpdk/spdk_pid111782 00:40:43.090 Removing: /var/run/dpdk/spdk_pid116312 00:40:43.090 Removing: /var/run/dpdk/spdk_pid121684 00:40:43.090 Removing: /var/run/dpdk/spdk_pid133768 00:40:43.090 Removing: /var/run/dpdk/spdk_pid134456 00:40:43.090 Removing: /var/run/dpdk/spdk_pid139696 00:40:43.090 Removing: /var/run/dpdk/spdk_pid140197 00:40:43.090 Removing: /var/run/dpdk/spdk_pid145268 00:40:43.090 Removing: 
/var/run/dpdk/spdk_pid152925 00:40:43.090 Removing: /var/run/dpdk/spdk_pid156050 00:40:43.090 Removing: /var/run/dpdk/spdk_pid168604 00:40:43.090 Removing: /var/run/dpdk/spdk_pid179617 00:40:43.090 Removing: /var/run/dpdk/spdk_pid181637 00:40:43.090 Removing: /var/run/dpdk/spdk_pid182727 00:40:43.090 Removing: /var/run/dpdk/spdk_pid203596 00:40:43.090 Removing: /var/run/dpdk/spdk_pid208953 00:40:43.090 Removing: /var/run/dpdk/spdk_pid265119 00:40:43.090 Removing: /var/run/dpdk/spdk_pid271518 00:40:43.090 Removing: /var/run/dpdk/spdk_pid278697 00:40:43.090 Removing: /var/run/dpdk/spdk_pid286566 00:40:43.090 Removing: /var/run/dpdk/spdk_pid286596 00:40:43.090 Removing: /var/run/dpdk/spdk_pid287603 00:40:43.090 Removing: /var/run/dpdk/spdk_pid288610 00:40:43.090 Removing: /var/run/dpdk/spdk_pid289615 00:40:43.090 Removing: /var/run/dpdk/spdk_pid290287 00:40:43.090 Removing: /var/run/dpdk/spdk_pid290289 00:40:43.090 Removing: /var/run/dpdk/spdk_pid290625 00:40:43.090 Removing: /var/run/dpdk/spdk_pid290651 00:40:43.090 Removing: /var/run/dpdk/spdk_pid290764 00:40:43.091 Removing: /var/run/dpdk/spdk_pid291832 00:40:43.091 Removing: /var/run/dpdk/spdk_pid292871 00:40:43.091 Removing: /var/run/dpdk/spdk_pid293953 00:40:43.091 Removing: /var/run/dpdk/spdk_pid294609 00:40:43.091 Removing: /var/run/dpdk/spdk_pid294651 00:40:43.091 Removing: /var/run/dpdk/spdk_pid294972 00:40:43.091 Removing: /var/run/dpdk/spdk_pid296429 00:40:43.091 Removing: /var/run/dpdk/spdk_pid297535 00:40:43.091 Removing: /var/run/dpdk/spdk_pid308061 00:40:43.091 Removing: /var/run/dpdk/spdk_pid342730 00:40:43.091 Removing: /var/run/dpdk/spdk_pid348244 00:40:43.091 Removing: /var/run/dpdk/spdk_pid350705 00:40:43.091 Removing: /var/run/dpdk/spdk_pid353001 00:40:43.091 Removing: /var/run/dpdk/spdk_pid353153 00:40:43.091 Removing: /var/run/dpdk/spdk_pid353415 00:40:43.091 Removing: /var/run/dpdk/spdk_pid353754 00:40:43.091 Removing: /var/run/dpdk/spdk_pid354479 00:40:43.091 Removing: 
/var/run/dpdk/spdk_pid356817 00:40:43.091 Removing: /var/run/dpdk/spdk_pid357917 00:40:43.091 Removing: /var/run/dpdk/spdk_pid358603 00:40:43.091 Removing: /var/run/dpdk/spdk_pid361164 00:40:43.091 Removing: /var/run/dpdk/spdk_pid362021 00:40:43.091 Removing: /var/run/dpdk/spdk_pid362734 00:40:43.091 Removing: /var/run/dpdk/spdk_pid367792 00:40:43.091 Removing: /var/run/dpdk/spdk_pid374498 00:40:43.091 Removing: /var/run/dpdk/spdk_pid374499 00:40:43.091 Removing: /var/run/dpdk/spdk_pid374500 00:40:43.091 Removing: /var/run/dpdk/spdk_pid379181 00:40:43.091 Removing: /var/run/dpdk/spdk_pid389757 00:40:43.091 Removing: /var/run/dpdk/spdk_pid394684 00:40:43.091 Removing: /var/run/dpdk/spdk_pid402302 00:40:43.091 Removing: /var/run/dpdk/spdk_pid403754 00:40:43.091 Removing: /var/run/dpdk/spdk_pid405402 00:40:43.091 Removing: /var/run/dpdk/spdk_pid407258 00:40:43.091 Removing: /var/run/dpdk/spdk_pid412702 00:40:43.091 Removing: /var/run/dpdk/spdk_pid418091 00:40:43.091 Removing: /var/run/dpdk/spdk_pid423131 00:40:43.091 Removing: /var/run/dpdk/spdk_pid432225 00:40:43.091 Removing: /var/run/dpdk/spdk_pid432231 00:40:43.091 Removing: /var/run/dpdk/spdk_pid437304 00:40:43.091 Removing: /var/run/dpdk/spdk_pid437613 00:40:43.091 Removing: /var/run/dpdk/spdk_pid437944 00:40:43.091 Removing: /var/run/dpdk/spdk_pid438286 00:40:43.091 Removing: /var/run/dpdk/spdk_pid438423 00:40:43.091 Removing: /var/run/dpdk/spdk_pid443991 00:40:43.091 Removing: /var/run/dpdk/spdk_pid444671 00:40:43.091 Removing: /var/run/dpdk/spdk_pid449998 00:40:43.091 Removing: /var/run/dpdk/spdk_pid453912 00:40:43.091 Removing: /var/run/dpdk/spdk_pid460316 00:40:43.091 Removing: /var/run/dpdk/spdk_pid466871 00:40:43.091 Removing: /var/run/dpdk/spdk_pid477075 00:40:43.091 Removing: /var/run/dpdk/spdk_pid485734 00:40:43.091 Removing: /var/run/dpdk/spdk_pid485776 00:40:43.091 Removing: /var/run/dpdk/spdk_pid509720 00:40:43.091 Removing: /var/run/dpdk/spdk_pid510553 00:40:43.091 Removing: 
/var/run/dpdk/spdk_pid511239 00:40:43.091 Removing: /var/run/dpdk/spdk_pid511925 00:40:43.091 Removing: /var/run/dpdk/spdk_pid512987 00:40:43.091 Removing: /var/run/dpdk/spdk_pid513677 00:40:43.091 Removing: /var/run/dpdk/spdk_pid514524 00:40:43.091 Removing: /var/run/dpdk/spdk_pid515306 00:40:43.091 Removing: /var/run/dpdk/spdk_pid520409 00:40:43.091 Removing: /var/run/dpdk/spdk_pid520740 00:40:43.091 Removing: /var/run/dpdk/spdk_pid527797 00:40:43.091 Removing: /var/run/dpdk/spdk_pid528149 00:40:43.091 Removing: /var/run/dpdk/spdk_pid534615 00:40:43.091 Removing: /var/run/dpdk/spdk_pid539687 00:40:43.091 Removing: /var/run/dpdk/spdk_pid551312 00:40:43.091 Removing: /var/run/dpdk/spdk_pid552066 00:40:43.091 Removing: /var/run/dpdk/spdk_pid557569 00:40:43.091 Removing: /var/run/dpdk/spdk_pid557922 00:40:43.091 Removing: /var/run/dpdk/spdk_pid562964 00:40:43.091 Removing: /var/run/dpdk/spdk_pid569704 00:40:43.091 Removing: /var/run/dpdk/spdk_pid572750 00:40:43.091 Removing: /var/run/dpdk/spdk_pid584917 00:40:43.091 Removing: /var/run/dpdk/spdk_pid595380 00:40:43.352 Removing: /var/run/dpdk/spdk_pid597308 00:40:43.352 Removing: /var/run/dpdk/spdk_pid598335 00:40:43.352 Removing: /var/run/dpdk/spdk_pid618545 00:40:43.352 Removing: /var/run/dpdk/spdk_pid623229 00:40:43.352 Removing: /var/run/dpdk/spdk_pid626464 00:40:43.352 Removing: /var/run/dpdk/spdk_pid634157 00:40:43.352 Removing: /var/run/dpdk/spdk_pid634162 00:40:43.352 Removing: /var/run/dpdk/spdk_pid640157 00:40:43.352 Removing: /var/run/dpdk/spdk_pid642553 00:40:43.352 Removing: /var/run/dpdk/spdk_pid644774 00:40:43.352 Removing: /var/run/dpdk/spdk_pid646255 00:40:43.352 Removing: /var/run/dpdk/spdk_pid648594 00:40:43.352 Removing: /var/run/dpdk/spdk_pid649977 00:40:43.352 Removing: /var/run/dpdk/spdk_pid660531 00:40:43.352 Removing: /var/run/dpdk/spdk_pid661171 00:40:43.352 Removing: /var/run/dpdk/spdk_pid661837 00:40:43.352 Removing: /var/run/dpdk/spdk_pid664798 00:40:43.352 Removing: 
/var/run/dpdk/spdk_pid665262 00:40:43.352 Removing: /var/run/dpdk/spdk_pid665819 00:40:43.352 Removing: /var/run/dpdk/spdk_pid670663 00:40:43.352 Removing: /var/run/dpdk/spdk_pid670713 00:40:43.352 Removing: /var/run/dpdk/spdk_pid672529 00:40:43.352 Removing: /var/run/dpdk/spdk_pid672978 00:40:43.352 Removing: /var/run/dpdk/spdk_pid673298 00:40:43.352 Removing: /var/run/dpdk/spdk_pid92887 00:40:43.352 Removing: /var/run/dpdk/spdk_pid94375 00:40:43.352 Removing: /var/run/dpdk/spdk_pid95219 00:40:43.352 Removing: /var/run/dpdk/spdk_pid96264 00:40:43.352 Removing: /var/run/dpdk/spdk_pid96604 00:40:43.352 Removing: /var/run/dpdk/spdk_pid97703 00:40:43.352 Removing: /var/run/dpdk/spdk_pid97923 00:40:43.352 Removing: /var/run/dpdk/spdk_pid98248 00:40:43.352 Removing: /var/run/dpdk/spdk_pid99385 00:40:43.352 Clean 00:40:43.352 09:58:30 -- common/autotest_common.sh@1453 -- # return 0 00:40:43.352 09:58:30 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:40:43.353 09:58:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:43.353 09:58:30 -- common/autotest_common.sh@10 -- # set +x 00:40:43.353 09:58:30 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:40:43.353 09:58:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:43.353 09:58:30 -- common/autotest_common.sh@10 -- # set +x 00:40:43.614 09:58:30 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:43.615 09:58:30 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:43.615 09:58:30 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:43.615 09:58:30 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:40:43.615 09:58:30 -- spdk/autotest.sh@398 -- # hostname 00:40:43.615 09:58:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:43.615 geninfo: WARNING: invalid characters removed from testname! 00:41:10.201 09:58:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:12.110 09:58:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:14.651 09:59:00 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:16.033 09:59:02 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:17.413 09:59:04 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:19.323 09:59:05 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:20.703 09:59:07 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:20.703 09:59:07 -- spdk/autorun.sh@1 -- $ timing_finish 00:41:20.703 09:59:07 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:41:20.703 09:59:07 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:41:20.703 09:59:07 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:41:20.703 09:59:07 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:20.703 + [[ -n 6897 ]] 00:41:20.703 + sudo kill 6897 00:41:20.714 [Pipeline] } 00:41:20.729 [Pipeline] // stage 00:41:20.737 [Pipeline] } 00:41:20.747 [Pipeline] // timeout 00:41:20.752 [Pipeline] 
} 00:41:20.761 [Pipeline] // catchError 00:41:20.767 [Pipeline] } 00:41:20.779 [Pipeline] // wrap 00:41:20.783 [Pipeline] } 00:41:20.796 [Pipeline] // catchError 00:41:20.805 [Pipeline] stage 00:41:20.807 [Pipeline] { (Epilogue) 00:41:20.820 [Pipeline] catchError 00:41:20.823 [Pipeline] { 00:41:20.840 [Pipeline] echo 00:41:20.842 Cleanup processes 00:41:20.847 [Pipeline] sh 00:41:21.138 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:21.138 686316 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:21.153 [Pipeline] sh 00:41:21.441 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:21.441 ++ grep -v 'sudo pgrep' 00:41:21.441 ++ awk '{print $1}' 00:41:21.441 + sudo kill -9 00:41:21.441 + true 00:41:21.452 [Pipeline] sh 00:41:21.739 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:33.978 [Pipeline] sh 00:41:34.269 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:34.269 Artifacts sizes are good 00:41:34.285 [Pipeline] archiveArtifacts 00:41:34.292 Archiving artifacts 00:41:34.756 [Pipeline] sh 00:41:35.101 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:35.141 [Pipeline] cleanWs 00:41:35.154 [WS-CLEANUP] Deleting project workspace... 00:41:35.154 [WS-CLEANUP] Deferred wipeout is used... 00:41:35.173 [WS-CLEANUP] done 00:41:35.175 [Pipeline] } 00:41:35.193 [Pipeline] // catchError 00:41:35.204 [Pipeline] sh 00:41:35.551 + logger -p user.info -t JENKINS-CI 00:41:35.562 [Pipeline] } 00:41:35.576 [Pipeline] // stage 00:41:35.581 [Pipeline] } 00:41:35.596 [Pipeline] // node 00:41:35.602 [Pipeline] End of Pipeline 00:41:35.635 Finished: SUCCESS